00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 627 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3287 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.113 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.113 The recommended git tool is: git 00:00:00.114 using credential 00000000-0000-0000-0000-000000000002 00:00:00.117 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.147 Fetching changes from the remote Git repository 00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.186 Using shallow fetch with depth 1 00:00:00.186 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.186 > git --version # timeout=10 00:00:00.215 > git --version # 'git version 2.39.2' 00:00:00.215 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.227 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.227 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.929 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.941 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.954 Checking out Revision 16485855f227725e8e9566ee24d00b82aaeff0db (FETCH_HEAD) 00:00:05.954 > git config core.sparsecheckout # timeout=10 00:00:05.969 > git read-tree -mu HEAD # timeout=10 00:00:05.989 > git checkout -f 16485855f227725e8e9566ee24d00b82aaeff0db # timeout=5 00:00:06.009 Commit message: "ansible/inventory: fix WFP37 mac address" 00:00:06.010 > git rev-list --no-walk 16485855f227725e8e9566ee24d00b82aaeff0db # timeout=10 00:00:06.091 [Pipeline] Start of Pipeline 00:00:06.106 [Pipeline] library 00:00:06.108 Loading library shm_lib@master 00:00:06.108 Library shm_lib@master is cached. Copying from home. 00:00:06.126 [Pipeline] node 00:29:40.473 Still waiting to schedule task 00:29:40.473 Waiting for next available executor on ‘vagrant-vm-host’ 00:58:05.051 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:58:05.053 [Pipeline] { 00:58:05.070 [Pipeline] catchError 00:58:05.072 [Pipeline] { 00:58:05.089 [Pipeline] wrap 00:58:05.103 [Pipeline] { 00:58:05.115 [Pipeline] stage 00:58:05.117 [Pipeline] { (Prologue) 00:58:05.143 [Pipeline] echo 00:58:05.145 Node: VM-host-WFP1 00:58:05.152 [Pipeline] cleanWs 00:58:05.162 [WS-CLEANUP] Deleting project workspace... 00:58:05.162 [WS-CLEANUP] Deferred wipeout is used... 
00:58:05.169 [WS-CLEANUP] done 00:58:05.343 [Pipeline] setCustomBuildProperty 00:58:05.435 [Pipeline] httpRequest 00:58:05.461 [Pipeline] echo 00:58:05.463 Sorcerer 10.211.164.101 is alive 00:58:05.474 [Pipeline] httpRequest 00:58:05.479 HttpMethod: GET 00:58:05.480 URL: http://10.211.164.101/packages/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:58:05.480 Sending request to url: http://10.211.164.101/packages/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:58:05.481 Response Code: HTTP/1.1 200 OK 00:58:05.482 Success: Status code 200 is in the accepted range: 200,404 00:58:05.482 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:58:05.627 [Pipeline] sh 00:58:05.907 + tar --no-same-owner -xf jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:58:05.922 [Pipeline] httpRequest 00:58:05.938 [Pipeline] echo 00:58:05.939 Sorcerer 10.211.164.101 is alive 00:58:05.946 [Pipeline] httpRequest 00:58:05.949 HttpMethod: GET 00:58:05.950 URL: http://10.211.164.101/packages/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:58:05.950 Sending request to url: http://10.211.164.101/packages/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:58:05.951 Response Code: HTTP/1.1 200 OK 00:58:05.952 Success: Status code 200 is in the accepted range: 200,404 00:58:05.952 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:58:08.248 [Pipeline] sh 00:58:08.539 + tar --no-same-owner -xf spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:58:11.083 [Pipeline] sh 00:58:11.365 + git -C spdk log --oneline -n5 00:58:11.365 8fb860b73 test/dd: check spdk_dd direct link to liburing 00:58:11.365 89648519b bdev/compress: Output the pm_path entry for bdev_get_bdevs() 00:58:11.365 a1a2e2b48 nvme/pcie: add debug print for number of SGL/PRP entries 00:58:11.365 8b5c4be8b nvme/fio_plugin: add support for the disable_pcie_sgl_merge option 00:58:11.365 e431ba2e4 nvme/pcie: add disable_pcie_sgl_merge option 00:58:11.389 [Pipeline] withCredentials 00:58:11.400 > git --version # timeout=10 00:58:11.414 > git --version # 'git version 2.39.2' 00:58:11.430 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:58:11.433 [Pipeline] { 00:58:11.446 [Pipeline] retry 00:58:11.449 [Pipeline] { 00:58:11.474 [Pipeline] sh 00:58:11.759 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:58:12.706 [Pipeline] } 00:58:12.730 [Pipeline] // retry 00:58:12.736 [Pipeline] } 00:58:12.759 [Pipeline] // withCredentials 00:58:12.769 [Pipeline] httpRequest 00:58:12.786 [Pipeline] echo 00:58:12.788 Sorcerer 10.211.164.101 is alive 00:58:12.799 [Pipeline] httpRequest 00:58:12.803 HttpMethod: GET 00:58:12.803 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:58:12.804 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:58:12.805 Response Code: HTTP/1.1 200 OK 00:58:12.805 Success: Status code 200 is in the accepted range: 200,404 00:58:12.806 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:58:14.036 [Pipeline] sh 00:58:14.315 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:58:15.716 [Pipeline] sh 00:58:15.994 + git -C dpdk log --oneline -n5 00:58:15.994 eeb0605f11 version: 23.11.0 00:58:15.994 238778122a doc: update release notes for 23.11 
00:58:15.994 46aa6b3cfc doc: fix description of RSS features 00:58:15.994 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:58:15.994 7e421ae345 devtools: support skipping forbid rule check 00:58:16.013 [Pipeline] writeFile 00:58:16.031 [Pipeline] sh 00:58:16.313 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:58:16.325 [Pipeline] sh 00:58:16.604 + cat autorun-spdk.conf 00:58:16.604 SPDK_RUN_FUNCTIONAL_TEST=1 00:58:16.604 SPDK_TEST_NVMF=1 00:58:16.604 SPDK_TEST_NVMF_TRANSPORT=tcp 00:58:16.604 SPDK_TEST_URING=1 00:58:16.604 SPDK_TEST_USDT=1 00:58:16.604 SPDK_RUN_UBSAN=1 00:58:16.604 NET_TYPE=virt 00:58:16.604 SPDK_TEST_NATIVE_DPDK=v23.11 00:58:16.604 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:58:16.604 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:58:16.611 RUN_NIGHTLY=1 00:58:16.613 [Pipeline] } 00:58:16.630 [Pipeline] // stage 00:58:16.645 [Pipeline] stage 00:58:16.647 [Pipeline] { (Run VM) 00:58:16.661 [Pipeline] sh 00:58:16.941 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:58:16.941 + echo 'Start stage prepare_nvme.sh' 00:58:16.941 Start stage prepare_nvme.sh 00:58:16.941 + [[ -n 7 ]] 00:58:16.941 + disk_prefix=ex7 00:58:16.941 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:58:16.941 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:58:16.941 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:58:16.941 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:58:16.941 ++ SPDK_TEST_NVMF=1 00:58:16.941 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:58:16.941 ++ SPDK_TEST_URING=1 00:58:16.941 ++ SPDK_TEST_USDT=1 00:58:16.941 ++ SPDK_RUN_UBSAN=1 00:58:16.941 ++ NET_TYPE=virt 00:58:16.941 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:58:16.941 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:58:16.941 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:58:16.941 ++ RUN_NIGHTLY=1 00:58:16.941 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:58:16.941 + nvme_files=() 00:58:16.941 + declare -A nvme_files 00:58:16.941 + backend_dir=/var/lib/libvirt/images/backends 00:58:16.941 + nvme_files['nvme.img']=5G 00:58:16.941 + nvme_files['nvme-cmb.img']=5G 00:58:16.941 + nvme_files['nvme-multi0.img']=4G 00:58:16.941 + nvme_files['nvme-multi1.img']=4G 00:58:16.941 + nvme_files['nvme-multi2.img']=4G 00:58:16.941 + nvme_files['nvme-openstack.img']=8G 00:58:16.941 + nvme_files['nvme-zns.img']=5G 00:58:16.941 + (( SPDK_TEST_NVME_PMR == 1 )) 00:58:16.941 + (( SPDK_TEST_FTL == 1 )) 00:58:16.941 + (( SPDK_TEST_NVME_FDP == 1 )) 00:58:16.941 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:58:16.941 + for nvme in "${!nvme_files[@]}" 00:58:16.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:58:16.941 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:58:16.941 + for nvme in "${!nvme_files[@]}" 00:58:16.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:58:16.941 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:58:16.941 + for nvme in "${!nvme_files[@]}" 00:58:16.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:58:16.941 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:58:16.941 + for nvme in "${!nvme_files[@]}" 00:58:16.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:58:16.941 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:58:16.941 + for nvme in "${!nvme_files[@]}" 00:58:16.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:58:16.941 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:58:16.941 + for nvme in "${!nvme_files[@]}" 00:58:16.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:58:17.200 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:58:17.200 + for nvme in "${!nvme_files[@]}" 00:58:17.200 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:58:17.200 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:58:17.200 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:58:17.200 + echo 'End stage prepare_nvme.sh' 00:58:17.200 End stage prepare_nvme.sh 00:58:17.212 [Pipeline] sh 00:58:17.492 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:58:17.492 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:58:17.492 00:58:17.492 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:58:17.492 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:58:17.492 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:58:17.492 HELP=0 00:58:17.492 DRY_RUN=0 00:58:17.492 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:58:17.492 NVME_DISKS_TYPE=nvme,nvme, 00:58:17.492 NVME_AUTO_CREATE=0 00:58:17.492 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:58:17.492 NVME_CMB=,, 00:58:17.492 NVME_PMR=,, 00:58:17.492 NVME_ZNS=,, 00:58:17.492 NVME_MS=,, 00:58:17.492 NVME_FDP=,, 
00:58:17.492 SPDK_VAGRANT_DISTRO=fedora38 00:58:17.492 SPDK_VAGRANT_VMCPU=10 00:58:17.492 SPDK_VAGRANT_VMRAM=12288 00:58:17.492 SPDK_VAGRANT_PROVIDER=libvirt 00:58:17.492 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:58:17.492 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:58:17.492 SPDK_OPENSTACK_NETWORK=0 00:58:17.492 VAGRANT_PACKAGE_BOX=0 00:58:17.492 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:58:17.492 FORCE_DISTRO=true 00:58:17.492 VAGRANT_BOX_VERSION= 00:58:17.492 EXTRA_VAGRANTFILES= 00:58:17.492 NIC_MODEL=e1000 00:58:17.492 00:58:17.492 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:58:17.492 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:58:20.024 Bringing machine 'default' up with 'libvirt' provider... 00:58:20.961 ==> default: Creating image (snapshot of base box volume). 00:58:21.220 ==> default: Creating domain with the following settings... 00:58:21.220 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721645725_ab070351a52c4af2ad5e 00:58:21.220 ==> default: -- Domain type: kvm 00:58:21.220 ==> default: -- Cpus: 10 00:58:21.220 ==> default: -- Feature: acpi 00:58:21.220 ==> default: -- Feature: apic 00:58:21.220 ==> default: -- Feature: pae 00:58:21.220 ==> default: -- Memory: 12288M 00:58:21.220 ==> default: -- Memory Backing: hugepages: 00:58:21.220 ==> default: -- Management MAC: 00:58:21.220 ==> default: -- Loader: 00:58:21.220 ==> default: -- Nvram: 00:58:21.220 ==> default: -- Base box: spdk/fedora38 00:58:21.220 ==> default: -- Storage pool: default 00:58:21.220 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721645725_ab070351a52c4af2ad5e.img (20G) 00:58:21.220 ==> default: -- Volume Cache: default 00:58:21.220 ==> default: -- Kernel: 00:58:21.220 ==> default: -- Initrd: 00:58:21.220 ==> default: -- Graphics Type: vnc 00:58:21.220 ==> default: -- Graphics Port: -1 00:58:21.220 ==> default: -- Graphics IP: 127.0.0.1 00:58:21.220 ==> default: -- Graphics Password: Not defined 00:58:21.220 ==> default: -- Video Type: cirrus 00:58:21.220 ==> default: -- Video VRAM: 9216 00:58:21.220 ==> default: -- Sound Type: 00:58:21.220 ==> default: -- Keymap: en-us 00:58:21.220 ==> default: -- TPM Path: 00:58:21.220 ==> default: -- INPUT: type=mouse, bus=ps2 00:58:21.220 ==> default: -- Command line args: 00:58:21.220 ==> default: -> value=-device, 00:58:21.220 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:58:21.220 ==> default: -> value=-drive, 00:58:21.220 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:58:21.220 ==> default: -> value=-device, 00:58:21.220 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:58:21.220 ==> default: -> value=-device, 00:58:21.220 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:58:21.220 ==> default: -> value=-drive, 00:58:21.220 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:58:21.220 ==> default: -> value=-device, 00:58:21.220 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:58:21.220 ==> default: -> value=-drive, 
00:58:21.220 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:58:21.220 ==> default: -> value=-device, 00:58:21.220 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:58:21.220 ==> default: -> value=-drive, 00:58:21.220 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:58:21.220 ==> default: -> value=-device, 00:58:21.220 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:58:21.789 ==> default: Creating shared folders metadata... 00:58:21.789 ==> default: Starting domain. 00:58:23.167 ==> default: Waiting for domain to get an IP address... 00:58:41.255 ==> default: Waiting for SSH to become available... 00:58:41.255 ==> default: Configuring and enabling network interfaces... 00:58:45.434 default: SSH address: 192.168.121.159:22 00:58:45.434 default: SSH username: vagrant 00:58:45.434 default: SSH auth method: private key 00:58:47.962 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:58:56.068 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:59:02.632 ==> default: Mounting SSHFS shared folder... 00:59:04.536 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:59:04.536 ==> default: Checking Mount.. 00:59:06.438 ==> default: Folder Successfully Mounted! 00:59:06.438 ==> default: Running provisioner: file... 00:59:07.372 default: ~/.gitconfig => .gitconfig 00:59:07.937 00:59:07.937 SUCCESS! 00:59:07.937 00:59:07.937 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:59:07.937 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:59:07.937 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:59:07.937 00:59:07.945 [Pipeline] } 00:59:07.962 [Pipeline] // stage 00:59:07.970 [Pipeline] dir 00:59:07.971 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:59:07.972 [Pipeline] { 00:59:07.985 [Pipeline] catchError 00:59:07.987 [Pipeline] { 00:59:08.000 [Pipeline] sh 00:59:08.278 + vagrant ssh-config --host vagrant 00:59:08.279 + sed -ne /^Host/,$p 00:59:08.279 + tee ssh_conf 00:59:11.562 Host vagrant 00:59:11.562 HostName 192.168.121.159 00:59:11.562 User vagrant 00:59:11.562 Port 22 00:59:11.562 UserKnownHostsFile /dev/null 00:59:11.562 StrictHostKeyChecking no 00:59:11.562 PasswordAuthentication no 00:59:11.562 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:59:11.562 IdentitiesOnly yes 00:59:11.562 LogLevel FATAL 00:59:11.562 ForwardAgent yes 00:59:11.562 ForwardX11 yes 00:59:11.562 00:59:11.575 [Pipeline] withEnv 00:59:11.576 [Pipeline] { 00:59:11.585 [Pipeline] sh 00:59:11.860 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:59:11.860 source /etc/os-release 00:59:11.860 [[ -e /image.version ]] && img=$(< /image.version) 00:59:11.860 # Minimal, systemd-like check. 
00:59:11.860 if [[ -e /.dockerenv ]]; then 00:59:11.860 # Clear garbage from the node's name: 00:59:11.860 # agt-er_autotest_547-896 -> autotest_547-896 00:59:11.860 # $HOSTNAME is the actual container id 00:59:11.860 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:59:11.860 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:59:11.860 # We can assume this is a mount from a host where container is running, 00:59:11.860 # so fetch its hostname to easily identify the target swarm worker. 00:59:11.860 container="$(< /etc/hostname) ($agent)" 00:59:11.860 else 00:59:11.860 # Fallback 00:59:11.860 container=$agent 00:59:11.860 fi 00:59:11.860 fi 00:59:11.860 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:59:11.860 00:59:12.130 [Pipeline] } 00:59:12.152 [Pipeline] // withEnv 00:59:12.162 [Pipeline] setCustomBuildProperty 00:59:12.177 [Pipeline] stage 00:59:12.179 [Pipeline] { (Tests) 00:59:12.198 [Pipeline] sh 00:59:12.529 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:59:12.801 [Pipeline] sh 00:59:13.081 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:59:13.353 [Pipeline] timeout 00:59:13.353 Timeout set to expire in 30 min 00:59:13.355 [Pipeline] { 00:59:13.370 [Pipeline] sh 00:59:13.655 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:59:14.221 HEAD is now at 8fb860b73 test/dd: check spdk_dd direct link to liburing 00:59:14.235 [Pipeline] sh 00:59:14.606 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:59:14.619 [Pipeline] sh 00:59:14.907 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:59:15.181 [Pipeline] sh 00:59:15.460 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:59:15.718 ++ readlink -f spdk_repo 00:59:15.718 + DIR_ROOT=/home/vagrant/spdk_repo 00:59:15.718 + [[ -n /home/vagrant/spdk_repo ]] 00:59:15.718 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:59:15.718 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:59:15.718 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:59:15.718 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:59:15.718 + [[ -d /home/vagrant/spdk_repo/output ]] 00:59:15.718 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:59:15.718 + cd /home/vagrant/spdk_repo 00:59:15.718 + source /etc/os-release 00:59:15.718 ++ NAME='Fedora Linux' 00:59:15.718 ++ VERSION='38 (Cloud Edition)' 00:59:15.718 ++ ID=fedora 00:59:15.718 ++ VERSION_ID=38 00:59:15.718 ++ VERSION_CODENAME= 00:59:15.718 ++ PLATFORM_ID=platform:f38 00:59:15.718 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:59:15.718 ++ ANSI_COLOR='0;38;2;60;110;180' 00:59:15.718 ++ LOGO=fedora-logo-icon 00:59:15.718 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:59:15.718 ++ HOME_URL=https://fedoraproject.org/ 00:59:15.718 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:59:15.718 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:59:15.718 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:59:15.718 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:59:15.718 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:59:15.718 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:59:15.718 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:59:15.718 ++ SUPPORT_END=2024-05-14 00:59:15.718 ++ VARIANT='Cloud Edition' 00:59:15.718 ++ VARIANT_ID=cloud 00:59:15.718 + uname -a 00:59:15.718 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:59:15.718 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:59:16.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:59:16.285 Hugepages 00:59:16.285 node hugesize free / total 00:59:16.285 node0 1048576kB 0 / 0 00:59:16.285 node0 2048kB 0 / 0 00:59:16.285 00:59:16.285 Type BDF Vendor Device NUMA Driver Device Block devices 00:59:16.285 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:59:16.285 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:59:16.285 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:59:16.285 + rm -f /tmp/spdk-ld-path 00:59:16.285 + source autorun-spdk.conf 00:59:16.285 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:59:16.285 ++ SPDK_TEST_NVMF=1 00:59:16.285 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:59:16.285 ++ SPDK_TEST_URING=1 00:59:16.285 ++ SPDK_TEST_USDT=1 00:59:16.285 ++ SPDK_RUN_UBSAN=1 00:59:16.285 ++ NET_TYPE=virt 00:59:16.285 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:59:16.285 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:59:16.285 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:59:16.285 ++ RUN_NIGHTLY=1 00:59:16.285 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:59:16.285 + [[ -n '' ]] 00:59:16.285 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:59:16.285 + for M in /var/spdk/build-*-manifest.txt 00:59:16.285 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:59:16.285 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:59:16.285 + for M in /var/spdk/build-*-manifest.txt 00:59:16.285 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:59:16.285 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:59:16.285 ++ uname 00:59:16.285 + [[ Linux == \L\i\n\u\x ]] 00:59:16.285 + sudo dmesg -T 00:59:16.285 + sudo dmesg --clear 00:59:16.285 + dmesg_pid=5850 00:59:16.285 + sudo dmesg -Tw 00:59:16.285 + [[ Fedora Linux == FreeBSD ]] 00:59:16.285 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:59:16.285 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:59:16.285 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:59:16.285 + [[ -x /usr/src/fio-static/fio ]] 00:59:16.285 + export FIO_BIN=/usr/src/fio-static/fio 00:59:16.285 + FIO_BIN=/usr/src/fio-static/fio 00:59:16.285 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:59:16.285 + [[ ! -v VFIO_QEMU_BIN ]] 00:59:16.285 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:59:16.285 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:59:16.285 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:59:16.285 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:59:16.285 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:59:16.285 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:59:16.285 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:59:16.544 Test configuration: 00:59:16.544 SPDK_RUN_FUNCTIONAL_TEST=1 00:59:16.544 SPDK_TEST_NVMF=1 00:59:16.544 SPDK_TEST_NVMF_TRANSPORT=tcp 00:59:16.544 SPDK_TEST_URING=1 00:59:16.544 SPDK_TEST_USDT=1 00:59:16.544 SPDK_RUN_UBSAN=1 00:59:16.544 NET_TYPE=virt 00:59:16.544 SPDK_TEST_NATIVE_DPDK=v23.11 00:59:16.544 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:59:16.544 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:59:16.544 RUN_NIGHTLY=1 10:56:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:59:16.544 10:56:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:59:16.544 10:56:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:16.544 10:56:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:16.544 10:56:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:16.544 10:56:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:16.544 10:56:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:16.544 10:56:21 -- paths/export.sh@5 -- $ export PATH 00:59:16.544 10:56:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:16.544 10:56:21 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:59:16.544 10:56:21 -- 
common/autobuild_common.sh@447 -- $ date +%s 00:59:16.544 10:56:21 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721645781.XXXXXX 00:59:16.544 10:56:21 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721645781.rVHbLN 00:59:16.544 10:56:21 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:59:16.544 10:56:21 -- common/autobuild_common.sh@453 -- $ '[' -n v23.11 ']' 00:59:16.544 10:56:21 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:59:16.544 10:56:21 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:59:16.544 10:56:21 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:59:16.544 10:56:21 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:59:16.545 10:56:21 -- common/autobuild_common.sh@463 -- $ get_config_params 00:59:16.545 10:56:21 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:59:16.545 10:56:21 -- common/autotest_common.sh@10 -- $ set +x 00:59:16.545 10:56:21 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:59:16.545 10:56:21 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:59:16.545 10:56:21 -- pm/common@17 -- $ local monitor 00:59:16.545 10:56:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:59:16.545 10:56:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:59:16.545 10:56:21 -- pm/common@25 -- $ sleep 1 00:59:16.545 10:56:21 -- pm/common@21 -- $ date +%s 00:59:16.545 10:56:21 -- pm/common@21 -- $ date +%s 00:59:16.545 10:56:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721645781 00:59:16.545 10:56:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721645781 00:59:16.545 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721645781_collect-vmstat.pm.log 00:59:16.545 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721645781_collect-cpu-load.pm.log 00:59:17.479 10:56:22 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:59:17.479 10:56:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:59:17.479 10:56:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:59:17.479 10:56:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:59:17.479 10:56:22 -- spdk/autobuild.sh@16 -- $ date -u 00:59:17.479 Mon Jul 22 10:56:22 AM UTC 2024 00:59:17.479 10:56:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:59:17.737 v24.09-pre-259-g8fb860b73 00:59:17.737 10:56:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:59:17.737 10:56:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:59:17.737 10:56:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:59:17.737 10:56:22 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:59:17.737 10:56:22 -- common/autotest_common.sh@1105 -- $ xtrace_disable 
00:59:17.737 10:56:22 -- common/autotest_common.sh@10 -- $ set +x 00:59:17.737 ************************************ 00:59:17.737 START TEST ubsan 00:59:17.737 ************************************ 00:59:17.737 using ubsan 00:59:17.737 10:56:22 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:59:17.737 00:59:17.737 real 0m0.000s 00:59:17.737 user 0m0.000s 00:59:17.737 sys 0m0.000s 00:59:17.737 10:56:22 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:59:17.737 ************************************ 00:59:17.737 END TEST ubsan 00:59:17.737 ************************************ 00:59:17.737 10:56:22 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:59:17.737 10:56:22 -- common/autotest_common.sh@1142 -- $ return 0 00:59:17.737 10:56:22 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:59:17.737 10:56:22 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:59:17.737 10:56:22 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:59:17.737 10:56:22 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:59:17.737 10:56:22 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:59:17.737 10:56:22 -- common/autotest_common.sh@10 -- $ set +x 00:59:17.737 ************************************ 00:59:17.737 START TEST build_native_dpdk 00:59:17.737 ************************************ 00:59:17.737 10:56:22 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:59:17.737 10:56:22 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:59:17.737 10:56:22 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:59:17.737 10:56:22 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:59:17.737 10:56:22 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:59:17.737 10:56:22 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:59:17.737 10:56:22 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:59:17.737 10:56:22 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:59:17.737 10:56:22 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:59:17.737 10:56:22 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:59:17.737 10:56:22 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:59:17.738 eeb0605f11 version: 23.11.0 00:59:17.738 238778122a doc: update release notes for 23.11 00:59:17.738 46aa6b3cfc doc: fix description of RSS features 00:59:17.738 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:59:17.738 7e421ae345 devtools: support skipping forbid rule check 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:59:17.738 10:56:22 
build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:59:17.738 patching file config/rte_config.h 00:59:17.738 Hunk #1 succeeded at 60 (offset 1 line). 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:59:17.738 10:56:22 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:59:17.738 patching file lib/pcapng/rte_pcapng.c 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:59:17.738 10:56:22 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:59:23.005 The Meson build system 00:59:23.005 Version: 1.3.1 00:59:23.005 Source dir: /home/vagrant/spdk_repo/dpdk 00:59:23.005 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:59:23.005 Build type: native build 00:59:23.005 Program cat found: YES (/usr/bin/cat) 00:59:23.005 Project name: DPDK 00:59:23.005 Project version: 23.11.0 00:59:23.005 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:59:23.005 C linker for the host machine: gcc ld.bfd 2.39-16 00:59:23.005 Host machine cpu family: x86_64 00:59:23.005 Host machine cpu: x86_64 00:59:23.005 Message: ## Building in Developer Mode ## 00:59:23.005 Program pkg-config found: YES (/usr/bin/pkg-config) 00:59:23.005 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:59:23.005 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:59:23.005 Program python3 found: YES (/usr/bin/python3) 00:59:23.005 Program cat found: YES (/usr/bin/cat) 00:59:23.005 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:59:23.005 Compiler for C supports arguments -march=native: YES 00:59:23.005 Checking for size of "void *" : 8 00:59:23.005 Checking for size of "void *" : 8 (cached) 00:59:23.005 Library m found: YES 00:59:23.005 Library numa found: YES 00:59:23.005 Has header "numaif.h" : YES 00:59:23.005 Library fdt found: NO 00:59:23.005 Library execinfo found: NO 00:59:23.005 Has header "execinfo.h" : YES 00:59:23.005 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:59:23.005 Run-time dependency libarchive found: NO (tried pkgconfig) 00:59:23.005 Run-time dependency libbsd found: NO (tried pkgconfig) 00:59:23.005 Run-time dependency jansson found: NO (tried pkgconfig) 00:59:23.005 Run-time dependency openssl found: YES 3.0.9 00:59:23.005 Run-time dependency libpcap found: YES 1.10.4 00:59:23.005 Has header "pcap.h" with dependency libpcap: YES 00:59:23.005 Compiler for C supports arguments -Wcast-qual: YES 00:59:23.006 Compiler for C supports arguments -Wdeprecated: YES 00:59:23.006 Compiler for C supports arguments -Wformat: YES 00:59:23.006 Compiler for C supports arguments -Wformat-nonliteral: NO 00:59:23.006 Compiler for C supports arguments -Wformat-security: NO 00:59:23.006 Compiler for C supports arguments -Wmissing-declarations: YES 00:59:23.006 Compiler for C supports arguments -Wmissing-prototypes: YES 00:59:23.006 Compiler for C supports arguments -Wnested-externs: YES 00:59:23.006 Compiler for C supports arguments -Wold-style-definition: YES 00:59:23.006 Compiler for C supports arguments -Wpointer-arith: YES 00:59:23.006 Compiler for C supports arguments -Wsign-compare: YES 00:59:23.006 Compiler for C supports arguments -Wstrict-prototypes: YES 00:59:23.006 Compiler for C supports arguments -Wundef: YES 00:59:23.006 Compiler for C supports arguments -Wwrite-strings: YES 00:59:23.006 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:59:23.006 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:59:23.006 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:59:23.006 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:59:23.006 Program objdump found: YES (/usr/bin/objdump) 00:59:23.006 Compiler for C supports arguments -mavx512f: YES 00:59:23.006 Checking if "AVX512 checking" compiles: YES 00:59:23.006 Fetching value of define "__SSE4_2__" : 1 00:59:23.006 Fetching value of define "__AES__" : 1 00:59:23.006 Fetching value of define "__AVX__" : 1 00:59:23.006 Fetching value of define "__AVX2__" : 1 00:59:23.006 Fetching value of define "__AVX512BW__" : 1 00:59:23.006 Fetching value of define "__AVX512CD__" : 1 00:59:23.006 Fetching value of define "__AVX512DQ__" : 1 00:59:23.006 Fetching value of define "__AVX512F__" : 1 00:59:23.006 Fetching value of define "__AVX512VL__" : 1 00:59:23.006 Fetching value of define "__PCLMUL__" : 1 00:59:23.006 Fetching value of define "__RDRND__" : 1 00:59:23.006 Fetching value of define "__RDSEED__" : 1 00:59:23.006 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:59:23.006 Fetching value of define "__znver1__" : (undefined) 00:59:23.006 Fetching value of define "__znver2__" : (undefined) 00:59:23.006 Fetching value of define "__znver3__" : (undefined) 00:59:23.006 Fetching value of define "__znver4__" : (undefined) 00:59:23.006 Compiler for C supports arguments -Wno-format-truncation: YES 00:59:23.006 Message: lib/log: Defining dependency "log" 00:59:23.006 Message: lib/kvargs: Defining dependency "kvargs" 00:59:23.006 Message: lib/telemetry: Defining dependency 
"telemetry" 00:59:23.006 Checking for function "getentropy" : NO 00:59:23.006 Message: lib/eal: Defining dependency "eal" 00:59:23.006 Message: lib/ring: Defining dependency "ring" 00:59:23.006 Message: lib/rcu: Defining dependency "rcu" 00:59:23.006 Message: lib/mempool: Defining dependency "mempool" 00:59:23.006 Message: lib/mbuf: Defining dependency "mbuf" 00:59:23.006 Fetching value of define "__PCLMUL__" : 1 (cached) 00:59:23.006 Fetching value of define "__AVX512F__" : 1 (cached) 00:59:23.006 Fetching value of define "__AVX512BW__" : 1 (cached) 00:59:23.006 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:59:23.006 Fetching value of define "__AVX512VL__" : 1 (cached) 00:59:23.006 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:59:23.006 Compiler for C supports arguments -mpclmul: YES 00:59:23.006 Compiler for C supports arguments -maes: YES 00:59:23.006 Compiler for C supports arguments -mavx512f: YES (cached) 00:59:23.006 Compiler for C supports arguments -mavx512bw: YES 00:59:23.006 Compiler for C supports arguments -mavx512dq: YES 00:59:23.006 Compiler for C supports arguments -mavx512vl: YES 00:59:23.006 Compiler for C supports arguments -mvpclmulqdq: YES 00:59:23.006 Compiler for C supports arguments -mavx2: YES 00:59:23.006 Compiler for C supports arguments -mavx: YES 00:59:23.006 Message: lib/net: Defining dependency "net" 00:59:23.006 Message: lib/meter: Defining dependency "meter" 00:59:23.006 Message: lib/ethdev: Defining dependency "ethdev" 00:59:23.006 Message: lib/pci: Defining dependency "pci" 00:59:23.006 Message: lib/cmdline: Defining dependency "cmdline" 00:59:23.006 Message: lib/metrics: Defining dependency "metrics" 00:59:23.006 Message: lib/hash: Defining dependency "hash" 00:59:23.006 Message: lib/timer: Defining dependency "timer" 00:59:23.006 Fetching value of define "__AVX512F__" : 1 (cached) 00:59:23.006 Fetching value of define "__AVX512VL__" : 1 (cached) 00:59:23.006 Fetching value of define "__AVX512CD__" : 1 (cached) 00:59:23.006 Fetching value of define "__AVX512BW__" : 1 (cached) 00:59:23.006 Message: lib/acl: Defining dependency "acl" 00:59:23.006 Message: lib/bbdev: Defining dependency "bbdev" 00:59:23.006 Message: lib/bitratestats: Defining dependency "bitratestats" 00:59:23.006 Run-time dependency libelf found: YES 0.190 00:59:23.006 Message: lib/bpf: Defining dependency "bpf" 00:59:23.006 Message: lib/cfgfile: Defining dependency "cfgfile" 00:59:23.006 Message: lib/compressdev: Defining dependency "compressdev" 00:59:23.006 Message: lib/cryptodev: Defining dependency "cryptodev" 00:59:23.006 Message: lib/distributor: Defining dependency "distributor" 00:59:23.006 Message: lib/dmadev: Defining dependency "dmadev" 00:59:23.006 Message: lib/efd: Defining dependency "efd" 00:59:23.006 Message: lib/eventdev: Defining dependency "eventdev" 00:59:23.006 Message: lib/dispatcher: Defining dependency "dispatcher" 00:59:23.006 Message: lib/gpudev: Defining dependency "gpudev" 00:59:23.006 Message: lib/gro: Defining dependency "gro" 00:59:23.006 Message: lib/gso: Defining dependency "gso" 00:59:23.006 Message: lib/ip_frag: Defining dependency "ip_frag" 00:59:23.006 Message: lib/jobstats: Defining dependency "jobstats" 00:59:23.006 Message: lib/latencystats: Defining dependency "latencystats" 00:59:23.006 Message: lib/lpm: Defining dependency "lpm" 00:59:23.006 Fetching value of define "__AVX512F__" : 1 (cached) 00:59:23.006 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:59:23.006 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:59:23.006 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:59:23.006 Message: lib/member: Defining dependency "member" 00:59:23.006 Message: lib/pcapng: Defining dependency "pcapng" 00:59:23.006 Compiler for C supports arguments -Wno-cast-qual: YES 00:59:23.006 Message: lib/power: Defining dependency "power" 00:59:23.006 Message: lib/rawdev: Defining dependency "rawdev" 00:59:23.006 Message: lib/regexdev: Defining dependency "regexdev" 00:59:23.006 Message: lib/mldev: Defining dependency "mldev" 00:59:23.006 Message: lib/rib: Defining dependency "rib" 00:59:23.006 Message: lib/reorder: Defining dependency "reorder" 00:59:23.006 Message: lib/sched: Defining dependency "sched" 00:59:23.006 Message: lib/security: Defining dependency "security" 00:59:23.006 Message: lib/stack: Defining dependency "stack" 00:59:23.006 Has header "linux/userfaultfd.h" : YES 00:59:23.006 Has header "linux/vduse.h" : YES 00:59:23.006 Message: lib/vhost: Defining dependency "vhost" 00:59:23.006 Message: lib/ipsec: Defining dependency "ipsec" 00:59:23.006 Message: lib/pdcp: Defining dependency "pdcp" 00:59:23.006 Fetching value of define "__AVX512F__" : 1 (cached) 00:59:23.006 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:59:23.006 Fetching value of define "__AVX512BW__" : 1 (cached) 00:59:23.006 Message: lib/fib: Defining dependency "fib" 00:59:23.006 Message: lib/port: Defining dependency "port" 00:59:23.006 Message: lib/pdump: Defining dependency "pdump" 00:59:23.006 Message: lib/table: Defining dependency "table" 00:59:23.006 Message: lib/pipeline: Defining dependency "pipeline" 00:59:23.006 Message: lib/graph: Defining dependency "graph" 00:59:23.006 Message: lib/node: Defining dependency "node" 00:59:23.006 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:59:23.006 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:59:23.006 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:59:24.910 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:59:24.910 Compiler for C supports arguments -Wno-sign-compare: YES 00:59:24.910 Compiler for C supports arguments -Wno-unused-value: YES 00:59:24.910 Compiler for C supports arguments -Wno-format: YES 00:59:24.910 Compiler for C supports arguments -Wno-format-security: YES 00:59:24.910 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:59:24.910 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:59:24.910 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:59:24.910 Compiler for C supports arguments -Wno-unused-parameter: YES 00:59:24.910 Fetching value of define "__AVX512F__" : 1 (cached) 00:59:24.910 Fetching value of define "__AVX512BW__" : 1 (cached) 00:59:24.910 Compiler for C supports arguments -mavx512f: YES (cached) 00:59:24.910 Compiler for C supports arguments -mavx512bw: YES (cached) 00:59:24.910 Compiler for C supports arguments -march=skylake-avx512: YES 00:59:24.910 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:59:24.910 Has header "sys/epoll.h" : YES 00:59:24.910 Program doxygen found: YES (/usr/bin/doxygen) 00:59:24.910 Configuring doxy-api-html.conf using configuration 00:59:24.910 Configuring doxy-api-man.conf using configuration 00:59:24.910 Program mandb found: YES (/usr/bin/mandb) 00:59:24.910 Program sphinx-build found: NO 00:59:24.910 Configuring rte_build_config.h using configuration 00:59:24.910 Message: 00:59:24.910 ================= 00:59:24.910 Applications Enabled 00:59:24.910 
================= 00:59:24.910 00:59:24.910 apps: 00:59:24.910 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:59:24.910 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:59:24.910 test-pmd, test-regex, test-sad, test-security-perf, 00:59:24.910 00:59:24.910 Message: 00:59:24.910 ================= 00:59:24.910 Libraries Enabled 00:59:24.910 ================= 00:59:24.910 00:59:24.910 libs: 00:59:24.910 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:59:24.910 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:59:24.910 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:59:24.911 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:59:24.911 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:59:24.911 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:59:24.911 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:59:24.911 00:59:24.911 00:59:24.911 Message: 00:59:24.911 =============== 00:59:24.911 Drivers Enabled 00:59:24.911 =============== 00:59:24.911 00:59:24.911 common: 00:59:24.911 00:59:24.911 bus: 00:59:24.911 pci, vdev, 00:59:24.911 mempool: 00:59:24.911 ring, 00:59:24.911 dma: 00:59:24.911 00:59:24.911 net: 00:59:24.911 i40e, 00:59:24.911 raw: 00:59:24.911 00:59:24.911 crypto: 00:59:24.911 00:59:24.911 compress: 00:59:24.911 00:59:24.911 regex: 00:59:24.911 00:59:24.911 ml: 00:59:24.911 00:59:24.911 vdpa: 00:59:24.911 00:59:24.911 event: 00:59:24.911 00:59:24.911 baseband: 00:59:24.911 00:59:24.911 gpu: 00:59:24.911 00:59:24.911 00:59:24.911 Message: 00:59:24.911 ================= 00:59:24.911 Content Skipped 00:59:24.911 ================= 00:59:24.911 00:59:24.911 apps: 00:59:24.911 00:59:24.911 libs: 00:59:24.911 00:59:24.911 drivers: 00:59:24.911 common/cpt: not in enabled drivers build config 00:59:24.911 common/dpaax: not in enabled drivers build config 00:59:24.911 common/iavf: not in enabled drivers build config 00:59:24.911 common/idpf: not in enabled drivers build config 00:59:24.911 common/mvep: not in enabled drivers build config 00:59:24.911 common/octeontx: not in enabled drivers build config 00:59:24.911 bus/auxiliary: not in enabled drivers build config 00:59:24.911 bus/cdx: not in enabled drivers build config 00:59:24.911 bus/dpaa: not in enabled drivers build config 00:59:24.911 bus/fslmc: not in enabled drivers build config 00:59:24.911 bus/ifpga: not in enabled drivers build config 00:59:24.911 bus/platform: not in enabled drivers build config 00:59:24.911 bus/vmbus: not in enabled drivers build config 00:59:24.911 common/cnxk: not in enabled drivers build config 00:59:24.911 common/mlx5: not in enabled drivers build config 00:59:24.911 common/nfp: not in enabled drivers build config 00:59:24.911 common/qat: not in enabled drivers build config 00:59:24.911 common/sfc_efx: not in enabled drivers build config 00:59:24.911 mempool/bucket: not in enabled drivers build config 00:59:24.911 mempool/cnxk: not in enabled drivers build config 00:59:24.911 mempool/dpaa: not in enabled drivers build config 00:59:24.911 mempool/dpaa2: not in enabled drivers build config 00:59:24.911 mempool/octeontx: not in enabled drivers build config 00:59:24.911 mempool/stack: not in enabled drivers build config 00:59:24.911 dma/cnxk: not in enabled drivers build config 00:59:24.911 dma/dpaa: not in enabled drivers build config 00:59:24.911 dma/dpaa2: not in enabled drivers build 
config 00:59:24.911 dma/hisilicon: not in enabled drivers build config 00:59:24.911 dma/idxd: not in enabled drivers build config 00:59:24.911 dma/ioat: not in enabled drivers build config 00:59:24.911 dma/skeleton: not in enabled drivers build config 00:59:24.911 net/af_packet: not in enabled drivers build config 00:59:24.911 net/af_xdp: not in enabled drivers build config 00:59:24.911 net/ark: not in enabled drivers build config 00:59:24.911 net/atlantic: not in enabled drivers build config 00:59:24.911 net/avp: not in enabled drivers build config 00:59:24.911 net/axgbe: not in enabled drivers build config 00:59:24.911 net/bnx2x: not in enabled drivers build config 00:59:24.911 net/bnxt: not in enabled drivers build config 00:59:24.911 net/bonding: not in enabled drivers build config 00:59:24.911 net/cnxk: not in enabled drivers build config 00:59:24.911 net/cpfl: not in enabled drivers build config 00:59:24.911 net/cxgbe: not in enabled drivers build config 00:59:24.911 net/dpaa: not in enabled drivers build config 00:59:24.911 net/dpaa2: not in enabled drivers build config 00:59:24.911 net/e1000: not in enabled drivers build config 00:59:24.911 net/ena: not in enabled drivers build config 00:59:24.911 net/enetc: not in enabled drivers build config 00:59:24.911 net/enetfec: not in enabled drivers build config 00:59:24.911 net/enic: not in enabled drivers build config 00:59:24.911 net/failsafe: not in enabled drivers build config 00:59:24.911 net/fm10k: not in enabled drivers build config 00:59:24.911 net/gve: not in enabled drivers build config 00:59:24.911 net/hinic: not in enabled drivers build config 00:59:24.911 net/hns3: not in enabled drivers build config 00:59:24.911 net/iavf: not in enabled drivers build config 00:59:24.911 net/ice: not in enabled drivers build config 00:59:24.911 net/idpf: not in enabled drivers build config 00:59:24.911 net/igc: not in enabled drivers build config 00:59:24.911 net/ionic: not in enabled drivers build config 00:59:24.911 net/ipn3ke: not in enabled drivers build config 00:59:24.911 net/ixgbe: not in enabled drivers build config 00:59:24.911 net/mana: not in enabled drivers build config 00:59:24.911 net/memif: not in enabled drivers build config 00:59:24.911 net/mlx4: not in enabled drivers build config 00:59:24.911 net/mlx5: not in enabled drivers build config 00:59:24.911 net/mvneta: not in enabled drivers build config 00:59:24.911 net/mvpp2: not in enabled drivers build config 00:59:24.911 net/netvsc: not in enabled drivers build config 00:59:24.911 net/nfb: not in enabled drivers build config 00:59:24.911 net/nfp: not in enabled drivers build config 00:59:24.911 net/ngbe: not in enabled drivers build config 00:59:24.911 net/null: not in enabled drivers build config 00:59:24.911 net/octeontx: not in enabled drivers build config 00:59:24.911 net/octeon_ep: not in enabled drivers build config 00:59:24.911 net/pcap: not in enabled drivers build config 00:59:24.911 net/pfe: not in enabled drivers build config 00:59:24.911 net/qede: not in enabled drivers build config 00:59:24.911 net/ring: not in enabled drivers build config 00:59:24.911 net/sfc: not in enabled drivers build config 00:59:24.911 net/softnic: not in enabled drivers build config 00:59:24.911 net/tap: not in enabled drivers build config 00:59:24.911 net/thunderx: not in enabled drivers build config 00:59:24.911 net/txgbe: not in enabled drivers build config 00:59:24.911 net/vdev_netvsc: not in enabled drivers build config 00:59:24.911 net/vhost: not in enabled drivers build config 
00:59:24.911 net/virtio: not in enabled drivers build config 00:59:24.911 net/vmxnet3: not in enabled drivers build config 00:59:24.911 raw/cnxk_bphy: not in enabled drivers build config 00:59:24.911 raw/cnxk_gpio: not in enabled drivers build config 00:59:24.911 raw/dpaa2_cmdif: not in enabled drivers build config 00:59:24.911 raw/ifpga: not in enabled drivers build config 00:59:24.911 raw/ntb: not in enabled drivers build config 00:59:24.911 raw/skeleton: not in enabled drivers build config 00:59:24.911 crypto/armv8: not in enabled drivers build config 00:59:24.911 crypto/bcmfs: not in enabled drivers build config 00:59:24.911 crypto/caam_jr: not in enabled drivers build config 00:59:24.911 crypto/ccp: not in enabled drivers build config 00:59:24.911 crypto/cnxk: not in enabled drivers build config 00:59:24.911 crypto/dpaa_sec: not in enabled drivers build config 00:59:24.911 crypto/dpaa2_sec: not in enabled drivers build config 00:59:24.911 crypto/ipsec_mb: not in enabled drivers build config 00:59:24.911 crypto/mlx5: not in enabled drivers build config 00:59:24.911 crypto/mvsam: not in enabled drivers build config 00:59:24.911 crypto/nitrox: not in enabled drivers build config 00:59:24.911 crypto/null: not in enabled drivers build config 00:59:24.911 crypto/octeontx: not in enabled drivers build config 00:59:24.911 crypto/openssl: not in enabled drivers build config 00:59:24.911 crypto/scheduler: not in enabled drivers build config 00:59:24.911 crypto/uadk: not in enabled drivers build config 00:59:24.911 crypto/virtio: not in enabled drivers build config 00:59:24.911 compress/isal: not in enabled drivers build config 00:59:24.911 compress/mlx5: not in enabled drivers build config 00:59:24.911 compress/octeontx: not in enabled drivers build config 00:59:24.911 compress/zlib: not in enabled drivers build config 00:59:24.911 regex/mlx5: not in enabled drivers build config 00:59:24.911 regex/cn9k: not in enabled drivers build config 00:59:24.911 ml/cnxk: not in enabled drivers build config 00:59:24.911 vdpa/ifc: not in enabled drivers build config 00:59:24.911 vdpa/mlx5: not in enabled drivers build config 00:59:24.911 vdpa/nfp: not in enabled drivers build config 00:59:24.911 vdpa/sfc: not in enabled drivers build config 00:59:24.911 event/cnxk: not in enabled drivers build config 00:59:24.911 event/dlb2: not in enabled drivers build config 00:59:24.911 event/dpaa: not in enabled drivers build config 00:59:24.911 event/dpaa2: not in enabled drivers build config 00:59:24.911 event/dsw: not in enabled drivers build config 00:59:24.911 event/opdl: not in enabled drivers build config 00:59:24.911 event/skeleton: not in enabled drivers build config 00:59:24.911 event/sw: not in enabled drivers build config 00:59:24.911 event/octeontx: not in enabled drivers build config 00:59:24.911 baseband/acc: not in enabled drivers build config 00:59:24.911 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:59:24.911 baseband/fpga_lte_fec: not in enabled drivers build config 00:59:24.911 baseband/la12xx: not in enabled drivers build config 00:59:24.911 baseband/null: not in enabled drivers build config 00:59:24.911 baseband/turbo_sw: not in enabled drivers build config 00:59:24.911 gpu/cuda: not in enabled drivers build config 00:59:24.911 00:59:24.911 00:59:24.911 Build targets in project: 217 00:59:24.911 00:59:24.911 DPDK 23.11.0 00:59:24.911 00:59:24.911 User defined options 00:59:24.911 libdir : lib 00:59:24.911 prefix : /home/vagrant/spdk_repo/dpdk/build 00:59:24.911 c_args : -fPIC -g 
-fcommon -Werror -Wno-stringop-overflow 00:59:24.911 c_link_args : 00:59:24.911 enable_docs : false 00:59:24.911 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:59:24.911 enable_kmods : false 00:59:24.911 machine : native 00:59:24.911 tests : false 00:59:24.911 00:59:24.911 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:59:24.911 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:59:24.911 10:56:29 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:59:24.911 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:59:24.911 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:59:24.911 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:59:24.911 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:59:24.911 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:59:24.911 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:59:24.911 [6/707] Linking static target lib/librte_kvargs.a 00:59:24.911 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:59:24.911 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:59:24.911 [9/707] Linking static target lib/librte_log.a 00:59:24.911 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:59:25.168 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:59:25.168 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:59:25.168 [13/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:59:25.168 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:59:25.168 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:59:25.168 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:59:25.425 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:59:25.426 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:59:25.426 [19/707] Linking target lib/librte_log.so.24.0 00:59:25.426 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:59:25.426 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:59:25.426 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:59:25.426 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:59:25.426 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:59:25.684 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:59:25.684 [26/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:59:25.684 [27/707] Linking static target lib/librte_telemetry.a 00:59:25.684 [28/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:59:25.684 [29/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:59:25.684 [30/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:59:25.684 [31/707] Linking target lib/librte_kvargs.so.24.0 00:59:25.684 [32/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:59:25.684 [33/707] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:59:25.941 [34/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:59:25.941 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:59:25.941 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:59:25.941 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:59:25.941 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:59:25.941 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:59:25.941 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:59:25.941 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:59:26.199 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:59:26.199 [43/707] Linking target lib/librte_telemetry.so.24.0 00:59:26.199 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:59:26.199 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:59:26.199 [46/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:59:26.199 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:59:26.457 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:59:26.457 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:59:26.457 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:59:26.457 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:59:26.457 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:59:26.457 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:59:26.457 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:59:26.457 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:59:26.715 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:59:26.715 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:59:26.715 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:59:26.715 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:59:26.715 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:59:26.715 [61/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:59:26.715 [62/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:59:26.715 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:59:26.715 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:59:26.715 [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:59:26.973 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:59:26.973 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:59:26.973 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:59:26.973 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:59:26.973 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:59:26.973 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:59:26.973 [72/707] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:59:26.973 [73/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:59:27.231 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:59:27.231 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:59:27.231 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:59:27.231 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:59:27.231 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:59:27.231 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:59:27.489 [80/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:59:27.489 [81/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:59:27.489 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:59:27.489 [83/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:59:27.489 [84/707] Linking static target lib/librte_ring.a 00:59:27.489 [85/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:59:27.747 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:59:27.747 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:59:27.747 [88/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:59:27.747 [89/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:59:27.747 [90/707] Linking static target lib/librte_eal.a 00:59:27.747 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:59:27.747 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:59:27.747 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:59:28.004 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:59:28.004 [95/707] Linking static target lib/librte_mempool.a 00:59:28.004 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:59:28.004 [97/707] Linking static target lib/librte_rcu.a 00:59:28.004 [98/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:59:28.261 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:59:28.261 [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:59:28.261 [101/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:59:28.261 [102/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:59:28.261 [103/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:59:28.261 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:59:28.261 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:59:28.518 [106/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:59:28.518 [107/707] Linking static target lib/librte_net.a 00:59:28.518 [108/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:59:28.518 [109/707] Linking static target lib/librte_mbuf.a 00:59:28.518 [110/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:59:28.518 [111/707] Linking static target lib/librte_meter.a 00:59:28.518 [112/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:59:28.518 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:59:28.775 [114/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture 
output) 00:59:28.775 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:59:28.775 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:59:28.775 [117/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:59:28.775 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:59:29.033 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:59:29.033 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:59:29.033 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:59:29.599 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:59:29.599 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:59:29.599 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:59:29.599 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:59:29.599 [126/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:59:29.599 [127/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:59:29.599 [128/707] Linking static target lib/librte_pci.a 00:59:29.599 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:59:29.599 [130/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:59:29.599 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:59:29.599 [132/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:59:29.867 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:59:29.867 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:59:29.867 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:59:29.867 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:59:29.867 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:59:29.867 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:59:29.867 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:59:29.867 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:59:29.867 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:59:30.125 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:59:30.125 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:59:30.125 [144/707] Linking static target lib/librte_cmdline.a 00:59:30.125 [145/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:59:30.125 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:59:30.383 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:59:30.383 [148/707] Linking static target lib/librte_metrics.a 00:59:30.383 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:59:30.383 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:59:30.641 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:59:30.641 [152/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:59:30.641 [153/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:59:30.641 [154/707] Linking static 
target lib/librte_timer.a 00:59:30.898 [155/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:59:30.898 [156/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:59:31.157 [157/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:59:31.157 [158/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:59:31.157 [159/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:59:31.157 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:59:31.721 [161/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:59:31.721 [162/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:59:31.721 [163/707] Linking static target lib/librte_bitratestats.a 00:59:31.721 [164/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:59:31.721 [165/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:59:31.721 [166/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:59:31.721 [167/707] Linking static target lib/librte_bbdev.a 00:59:31.978 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:59:31.978 [169/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:59:31.978 [170/707] Linking static target lib/librte_hash.a 00:59:32.235 [171/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:59:32.235 [172/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:59:32.235 [173/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:59:32.235 [174/707] Linking static target lib/acl/libavx2_tmp.a 00:59:32.492 [175/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:59:32.492 [176/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:59:32.492 [177/707] Linking static target lib/librte_ethdev.a 00:59:32.492 [178/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:59:32.492 [179/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:59:32.751 [180/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:59:32.751 [181/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:59:32.751 [182/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:59:32.751 [183/707] Linking static target lib/librte_cfgfile.a 00:59:32.751 [184/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:59:33.009 [185/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:59:33.009 [186/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:59:33.009 [187/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:59:33.009 [188/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:59:33.268 [189/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:59:33.268 [190/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:59:33.268 [191/707] Linking static target lib/librte_bpf.a 00:59:33.268 [192/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:59:33.268 [193/707] Linking static target lib/librte_compressdev.a 00:59:33.268 [194/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:59:33.532 [195/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:59:33.532 [196/707] Generating 
lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:59:33.532 [197/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:59:33.790 [198/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:59:33.790 [199/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:59:33.790 [200/707] Linking static target lib/librte_acl.a 00:59:33.790 [201/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:59:33.790 [202/707] Linking static target lib/librte_distributor.a 00:59:33.790 [203/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:59:34.048 [204/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:59:34.048 [205/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:59:34.048 [206/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:59:34.048 [207/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:59:34.048 [208/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:59:34.048 [209/707] Linking target lib/librte_eal.so.24.0 00:59:34.307 [210/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:59:34.307 [211/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:59:34.307 [212/707] Linking static target lib/librte_dmadev.a 00:59:34.307 [213/707] Linking target lib/librte_ring.so.24.0 00:59:34.307 [214/707] Linking target lib/librte_meter.so.24.0 00:59:34.307 [215/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:59:34.307 [216/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:59:34.568 [217/707] Linking target lib/librte_rcu.so.24.0 00:59:34.568 [218/707] Linking target lib/librte_mempool.so.24.0 00:59:34.568 [219/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:59:34.568 [220/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:59:34.568 [221/707] Linking target lib/librte_pci.so.24.0 00:59:34.568 [222/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:59:34.568 [223/707] Linking target lib/librte_timer.so.24.0 00:59:34.568 [224/707] Linking target lib/librte_mbuf.so.24.0 00:59:34.568 [225/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:59:34.827 [226/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:59:34.827 [227/707] Linking target lib/librte_acl.so.24.0 00:59:34.827 [228/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:59:34.827 [229/707] Linking target lib/librte_cfgfile.so.24.0 00:59:34.827 [230/707] Linking target lib/librte_net.so.24.0 00:59:34.827 [231/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:59:34.827 [232/707] Linking target lib/librte_bbdev.so.24.0 00:59:34.827 [233/707] Linking target lib/librte_compressdev.so.24.0 00:59:34.827 [234/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:59:34.827 [235/707] Linking target lib/librte_distributor.so.24.0 00:59:34.827 [236/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:59:34.827 [237/707] Linking static target lib/librte_efd.a 00:59:34.827 
[238/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:59:34.827 [239/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:59:34.827 [240/707] Linking target lib/librte_cmdline.so.24.0 00:59:34.827 [241/707] Linking static target lib/librte_cryptodev.a 00:59:35.087 [242/707] Linking target lib/librte_hash.so.24.0 00:59:35.087 [243/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:59:35.087 [244/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:59:35.087 [245/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:59:35.087 [246/707] Linking target lib/librte_efd.so.24.0 00:59:35.087 [247/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:59:35.348 [248/707] Linking target lib/librte_dmadev.so.24.0 00:59:35.348 [249/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:59:35.348 [250/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:59:35.348 [251/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:59:35.610 [252/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:59:35.610 [253/707] Linking static target lib/librte_dispatcher.a 00:59:35.874 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:59:35.874 [255/707] Linking static target lib/librte_gpudev.a 00:59:35.874 [256/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:59:35.874 [257/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:59:35.874 [258/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:59:35.874 [259/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:59:36.139 [260/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:59:36.139 [261/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:59:36.399 [262/707] Linking target lib/librte_cryptodev.so.24.0 00:59:36.399 [263/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:59:36.399 [264/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:59:36.399 [265/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:59:36.399 [266/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:59:36.399 [267/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:59:36.399 [268/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:59:36.399 [269/707] Linking static target lib/librte_eventdev.a 00:59:36.399 [270/707] Linking static target lib/librte_gro.a 00:59:36.399 [271/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:59:36.658 [272/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:59:36.658 [273/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:59:36.658 [274/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:59:36.658 [275/707] Linking target lib/librte_gpudev.so.24.0 00:59:36.658 [276/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:59:36.917 [277/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:59:36.917 [278/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 
00:59:36.917 [279/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:59:36.917 [280/707] Linking static target lib/librte_gso.a 00:59:36.917 [281/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:59:37.176 [282/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:59:37.176 [283/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:59:37.176 [284/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:59:37.176 [285/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:59:37.176 [286/707] Linking static target lib/librte_jobstats.a 00:59:37.176 [287/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:59:37.176 [288/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:59:37.435 [289/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:59:37.435 [290/707] Linking static target lib/librte_ip_frag.a 00:59:37.435 [291/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:59:37.435 [292/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:59:37.435 [293/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:59:37.435 [294/707] Linking static target lib/librte_latencystats.a 00:59:37.435 [295/707] Linking target lib/librte_ethdev.so.24.0 00:59:37.435 [296/707] Linking target lib/librte_jobstats.so.24.0 00:59:37.693 [297/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:59:37.693 [298/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:59:37.693 [299/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:59:37.693 [300/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:59:37.693 [301/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:59:37.693 [302/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:59:37.693 [303/707] Linking target lib/librte_metrics.so.24.0 00:59:37.693 [304/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:59:37.693 [305/707] Linking target lib/librte_bpf.so.24.0 00:59:37.693 [306/707] Linking target lib/librte_gro.so.24.0 00:59:37.693 [307/707] Linking target lib/librte_gso.so.24.0 00:59:37.693 [308/707] Linking target lib/librte_ip_frag.so.24.0 00:59:37.693 [309/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:59:37.693 [310/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:59:37.955 [311/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:59:37.955 [312/707] Linking target lib/librte_latencystats.so.24.0 00:59:37.955 [313/707] Linking target lib/librte_bitratestats.so.24.0 00:59:37.955 [314/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:59:37.955 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:59:37.955 [316/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:59:37.955 [317/707] Linking static target lib/librte_lpm.a 00:59:37.956 [318/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:59:38.215 [319/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:59:38.215 [320/707] 
Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:59:38.215 [321/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:59:38.215 [322/707] Linking static target lib/librte_pcapng.a 00:59:38.215 [323/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:59:38.215 [324/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:59:38.473 [325/707] Linking target lib/librte_lpm.so.24.0 00:59:38.473 [326/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:59:38.473 [327/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:59:38.473 [328/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:59:38.473 [329/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:59:38.473 [330/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:59:38.473 [331/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:59:38.473 [332/707] Linking target lib/librte_pcapng.so.24.0 00:59:38.730 [333/707] Linking target lib/librte_eventdev.so.24.0 00:59:38.730 [334/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:59:38.730 [335/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:59:38.730 [336/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:59:38.730 [337/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:59:38.730 [338/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:59:38.730 [339/707] Linking target lib/librte_dispatcher.so.24.0 00:59:38.988 [340/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:59:38.988 [341/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:59:38.988 [342/707] Linking static target lib/librte_power.a 00:59:38.988 [343/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:59:38.988 [344/707] Linking static target lib/librte_rawdev.a 00:59:38.988 [345/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:59:38.988 [346/707] Linking static target lib/librte_regexdev.a 00:59:38.988 [347/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:59:39.246 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:59:39.246 [349/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:59:39.246 [350/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:59:39.246 [351/707] Linking static target lib/librte_mldev.a 00:59:39.246 [352/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:59:39.246 [353/707] Linking static target lib/librte_member.a 00:59:39.246 [354/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:59:39.503 [355/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:59:39.503 [356/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:59:39.503 [357/707] Linking target lib/librte_rawdev.so.24.0 00:59:39.503 [358/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:59:39.503 [359/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:59:39.503 [360/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:59:39.503 
[361/707] Linking target lib/librte_member.so.24.0 00:59:39.503 [362/707] Linking target lib/librte_power.so.24.0 00:59:39.503 [363/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:59:39.503 [364/707] Linking static target lib/librte_reorder.a 00:59:39.761 [365/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:59:39.761 [366/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:59:39.761 [367/707] Linking static target lib/librte_rib.a 00:59:39.761 [368/707] Linking target lib/librte_regexdev.so.24.0 00:59:39.761 [369/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:59:39.761 [370/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:59:39.761 [371/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:59:39.761 [372/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:59:39.761 [373/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:59:40.030 [374/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:59:40.030 [375/707] Linking target lib/librte_reorder.so.24.0 00:59:40.030 [376/707] Linking static target lib/librte_stack.a 00:59:40.030 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:59:40.030 [378/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:59:40.030 [379/707] Linking static target lib/librte_security.a 00:59:40.030 [380/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:59:40.030 [381/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:59:40.030 [382/707] Linking target lib/librte_stack.so.24.0 00:59:40.289 [383/707] Linking target lib/librte_rib.so.24.0 00:59:40.289 [384/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:59:40.289 [385/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:59:40.289 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:59:40.289 [387/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:59:40.548 [388/707] Linking target lib/librte_mldev.so.24.0 00:59:40.548 [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:59:40.548 [390/707] Linking target lib/librte_security.so.24.0 00:59:40.548 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:59:40.548 [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:59:40.548 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:59:40.548 [394/707] Linking static target lib/librte_sched.a 00:59:40.805 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:59:40.805 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:59:41.062 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:59:41.062 [398/707] Linking target lib/librte_sched.so.24.0 00:59:41.062 [399/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:59:41.062 [400/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:59:41.062 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:59:41.320 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:59:41.320 [403/707] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:59:41.320 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:59:41.581 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:59:41.581 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:59:41.581 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:59:41.842 [408/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:59:41.842 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:59:41.842 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:59:41.842 [411/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:59:41.842 [412/707] Linking static target lib/librte_ipsec.a 00:59:41.842 [413/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:59:42.101 [414/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:59:42.101 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:59:42.101 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:59:42.101 [417/707] Linking target lib/librte_ipsec.so.24.0 00:59:42.360 [418/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:59:42.360 [419/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:59:42.360 [420/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:59:42.618 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:59:42.618 [422/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:59:42.618 [423/707] Linking static target lib/librte_fib.a 00:59:42.619 [424/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:59:42.619 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:59:42.877 [426/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:59:42.877 [427/707] Linking static target lib/librte_pdcp.a 00:59:42.877 [428/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:59:42.877 [429/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:59:42.877 [430/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:59:42.877 [431/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:59:43.136 [432/707] Linking target lib/librte_fib.so.24.0 00:59:43.136 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:59:43.136 [434/707] Linking target lib/librte_pdcp.so.24.0 00:59:43.395 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:59:43.395 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:59:43.655 [437/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:59:43.655 [438/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:59:43.655 [439/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:59:43.655 [440/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:59:43.913 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:59:43.913 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:59:44.172 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:59:44.172 [444/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:59:44.172 [445/707] Compiling C object 
lib/librte_port.a.p/port_rte_port_ring.c.o 00:59:44.172 [446/707] Linking static target lib/librte_port.a 00:59:44.172 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:59:44.172 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:59:44.172 [449/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:59:44.467 [450/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:59:44.467 [451/707] Linking static target lib/librte_pdump.a 00:59:44.467 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:59:44.467 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:59:44.467 [454/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:59:44.467 [455/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:59:44.740 [456/707] Linking target lib/librte_pdump.so.24.0 00:59:44.740 [457/707] Linking target lib/librte_port.so.24.0 00:59:44.740 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:59:44.740 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:59:44.999 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:59:44.999 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:59:44.999 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:59:44.999 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:59:44.999 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:59:45.257 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:59:45.257 [466/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:59:45.257 [467/707] Linking static target lib/librte_table.a 00:59:45.257 [468/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:59:45.514 [469/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:59:45.514 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:59:45.771 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:59:46.028 [472/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:59:46.028 [473/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:59:46.028 [474/707] Linking target lib/librte_table.so.24.0 00:59:46.028 [475/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:59:46.028 [476/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:59:46.285 [477/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:59:46.285 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:59:46.542 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:59:46.542 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:59:46.542 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:59:46.542 [482/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:59:46.542 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:59:46.797 [484/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:59:46.797 [485/707] Linking static target lib/librte_graph.a 00:59:46.797 [486/707] Compiling 
C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:59:47.053 [487/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:59:47.053 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:59:47.053 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:59:47.053 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:59:47.322 [491/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:59:47.322 [492/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:59:47.578 [493/707] Linking target lib/librte_graph.so.24.0 00:59:47.578 [494/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:59:47.578 [495/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:59:47.578 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:59:47.856 [497/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:59:47.856 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:59:47.856 [499/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:59:47.856 [500/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:59:47.856 [501/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:59:48.112 [502/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:59:48.112 [503/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:59:48.112 [504/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:59:48.368 [505/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:59:48.368 [506/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:59:48.369 [507/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:59:48.369 [508/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:59:48.369 [509/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:59:48.369 [510/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:59:48.625 [511/707] Linking static target lib/librte_node.a 00:59:48.625 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:59:48.625 [513/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:59:48.625 [514/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:59:48.881 [515/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:59:48.881 [516/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:59:48.881 [517/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:59:48.881 [518/707] Linking target lib/librte_node.so.24.0 00:59:48.881 [519/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:59:48.881 [520/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:59:48.881 [521/707] Linking static target drivers/librte_bus_pci.a 00:59:48.881 [522/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:59:48.881 [523/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:59:48.881 [524/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:59:48.881 [525/707] Linking static target drivers/librte_bus_vdev.a 00:59:49.138 [526/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:59:49.138 
[527/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:59:49.138 [528/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:59:49.138 [529/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:59:49.138 [530/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:59:49.138 [531/707] Linking target drivers/librte_bus_vdev.so.24.0 00:59:49.395 [532/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:59:49.395 [533/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:59:49.395 [534/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:59:49.395 [535/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:59:49.395 [536/707] Linking target drivers/librte_bus_pci.so.24.0 00:59:49.395 [537/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:59:49.395 [538/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:59:49.395 [539/707] Linking static target drivers/librte_mempool_ring.a 00:59:49.395 [540/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:59:49.652 [541/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:59:49.652 [542/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:59:49.652 [543/707] Linking target drivers/librte_mempool_ring.so.24.0 00:59:49.908 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:59:50.165 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:59:50.165 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:59:50.165 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:59:50.730 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:59:50.987 [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:59:50.987 [550/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:59:50.987 [551/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:59:51.245 [552/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:59:51.245 [553/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:59:51.245 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:59:51.503 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:59:51.503 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:59:51.503 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:59:51.760 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:59:51.760 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:59:52.017 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:59:52.274 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:59:52.274 [562/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:59:52.274 [563/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:59:52.532 [564/707] Compiling C object 
app/dpdk-graph.p/graph_ethdev.c.o 00:59:52.532 [565/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:59:52.532 [566/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:59:52.532 [567/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:59:52.790 [568/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:59:52.790 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:59:52.790 [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:59:53.048 [571/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:59:53.048 [572/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:59:53.048 [573/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:59:53.048 [574/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:59:53.048 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:59:53.304 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:59:53.304 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:59:53.560 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:59:53.560 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:59:53.560 [580/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:59:53.560 [581/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:59:53.815 [582/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:59:53.815 [583/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:59:53.816 [584/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:59:53.816 [585/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:59:53.816 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:59:53.816 [587/707] Linking static target drivers/librte_net_i40e.a 00:59:53.816 [588/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:59:54.072 [589/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:59:54.072 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:59:54.329 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:59:54.329 [592/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:59:54.598 [593/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:59:54.598 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:59:54.598 [595/707] Linking target drivers/librte_net_i40e.so.24.0 00:59:54.598 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:59:54.598 [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:59:54.598 [598/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:59:54.858 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:59:55.115 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:59:55.115 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:59:55.115 [602/707] Compiling C 
object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:59:55.372 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:59:55.372 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:59:55.372 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:59:55.372 [606/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:59:55.629 [607/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:59:55.629 [608/707] Linking static target lib/librte_vhost.a 00:59:55.629 [609/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:59:55.629 [610/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:59:55.629 [611/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:59:55.629 [612/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:59:55.629 [613/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:59:55.886 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:59:55.886 [615/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:59:56.142 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:59:56.143 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:59:56.143 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:59:56.707 [619/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:59:56.707 [620/707] Linking target lib/librte_vhost.so.24.0 00:59:56.965 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:59:56.965 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:59:56.965 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:59:56.965 [624/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:59:56.965 [625/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:59:57.223 [626/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:59:57.223 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:59:57.223 [628/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:59:57.223 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:59:57.223 [630/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:59:57.481 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:59:57.481 [632/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:59:57.481 [633/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:59:57.481 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:59:57.739 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:59:57.739 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:59:57.739 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:59:57.997 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:59:57.997 [639/707] Compiling C 
object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:59:57.997 [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:59:58.255 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:59:58.255 [642/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:59:58.255 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:59:58.255 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:59:58.255 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:59:58.255 [646/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:59:58.512 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:59:58.769 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:59:58.769 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:59:58.769 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:59:58.769 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:59:58.769 [652/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:59:59.025 [653/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:59:59.025 [654/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:59:59.025 [655/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:59:59.025 [656/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:59:59.281 [657/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:59:59.281 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:59:59.281 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:59:59.538 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:59:59.797 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:59:59.797 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:59:59.797 [663/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:59:59.797 [664/707] Linking static target lib/librte_pipeline.a 00:59:59.797 [665/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:59:59.797 [666/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 01:00:00.054 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 01:00:00.313 [668/707] Linking target app/dpdk-graph 01:00:00.313 [669/707] Linking target app/dpdk-dumpcap 01:00:00.313 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 01:00:00.571 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 01:00:00.571 [672/707] Linking target app/dpdk-pdump 01:00:00.571 [673/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 01:00:00.832 [674/707] Linking target app/dpdk-proc-info 01:00:00.832 [675/707] Linking target app/dpdk-test-acl 01:00:01.093 [676/707] Linking target app/dpdk-test-cmdline 01:00:01.093 [677/707] Linking target app/dpdk-test-bbdev 01:00:01.093 [678/707] Linking target app/dpdk-test-compress-perf 01:00:01.093 [679/707] Linking target app/dpdk-test-crypto-perf 01:00:01.093 [680/707] Linking target app/dpdk-test-dma-perf 01:00:01.356 [681/707] Linking target app/dpdk-test-eventdev 01:00:01.356 [682/707] Linking target app/dpdk-test-fib 
01:00:01.619 [683/707] Linking target app/dpdk-test-gpudev 01:00:01.619 [684/707] Linking target app/dpdk-test-flow-perf 01:00:01.619 [685/707] Linking target app/dpdk-test-mldev 01:00:01.619 [686/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 01:00:01.619 [687/707] Linking target app/dpdk-test-pipeline 01:00:01.619 [688/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 01:00:01.878 [689/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 01:00:01.878 [690/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 01:00:01.878 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 01:00:01.878 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 01:00:02.137 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 01:00:02.137 [694/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 01:00:02.397 [695/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 01:00:02.397 [696/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 01:00:02.397 [697/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 01:00:02.656 [698/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 01:00:02.656 [699/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 01:00:02.656 [700/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 01:00:02.656 [701/707] Linking target lib/librte_pipeline.so.24.0 01:00:02.914 [702/707] Linking target app/dpdk-test-sad 01:00:02.914 [703/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 01:00:02.914 [704/707] Linking target app/dpdk-test-regex 01:00:03.172 [705/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 01:00:03.172 [706/707] Linking target app/dpdk-test-security-perf 01:00:03.740 [707/707] Linking target app/dpdk-testpmd 01:00:03.740 10:57:08 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 01:00:03.740 10:57:08 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 01:00:03.740 10:57:08 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 01:00:03.740 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 01:00:03.740 [0/1] Installing files. 
01:00:04.002 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 01:00:04.002 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 01:00:04.003 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 01:00:04.004 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 01:00:04.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 01:00:04.006 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 01:00:04.006 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 01:00:04.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 01:00:04.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 01:00:04.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 01:00:04.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 01:00:04.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 01:00:04.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 01:00:04.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 01:00:04.007 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
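Note (editorial, not part of the captured log): the runtime libraries listed above (librte_eal, librte_ring, librte_mempool, librte_mbuf, and the rest) are installed as .a/.so.24.0 pairs under /home/vagrant/spdk_repo/dpdk/build/lib, with their headers under build/include. As a rough sketch only of how that installed tree could be consumed, the minimal program below initializes the EAL and touches the mempool/mbuf/ring libraries; the pool and ring names, the sizes, and the pkg-config compile line in the comment are illustrative assumptions (the libdpdk.pc file itself is installed later in this step), not anything this job builds or runs.

/*
 * Sketch, assuming the install prefix shown in this log:
 *   PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig \
 *     gcc hello_dpdk.c -o hello_dpdk $(pkg-config --cflags --libs libdpdk)
 */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_ring.h>

int main(int argc, char **argv)
{
	/* Consumes EAL arguments (-l, --no-pci, ...) and brings up the runtime. */
	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "rte_eal_init() failed\n");
		return 1;
	}

	/* Small packet-buffer pool backed by librte_mempool/librte_mbuf. */
	struct rte_mempool *pool = rte_pktmbuf_pool_create("demo_pool", 1024, 32, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

	/* Single-producer/single-consumer ring from librte_ring. */
	struct rte_ring *ring = rte_ring_create("demo_ring", 256, rte_socket_id(),
			RING_F_SP_ENQ | RING_F_SC_DEQ);

	printf("lcores=%u pool=%p ring=%p\n", rte_lcore_count(),
	       (void *)pool, (void *)ring);

	rte_eal_cleanup();
	return 0;
}

The install step itself continues below.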
01:00:04.007 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
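Note (editorial, not part of the captured log): librte_lpm and librte_fib installed just above are the lookup libraries behind the fib/routing example files copied earlier in this step. Purely as a hedged sketch of the installed librte_lpm API as I understand it, an IPv4 longest-prefix-match table might be set up as below; the table name, sizes, and route values are made-up illustrations, and the snippet assumes the same pkg-config setup and a prior successful rte_eal_init() as in the previous sketch.

/* Sketch only: IPv4 longest-prefix match with the installed librte_lpm. */
#include <stdio.h>
#include <rte_ip.h>
#include <rte_lcore.h>
#include <rte_lpm.h>

static void lpm_demo(void)
{
	struct rte_lpm_config cfg = {
		.max_rules = 1024,     /* arbitrary sizes for the sketch */
		.number_tbl8s = 256,
		.flags = 0,
	};
	struct rte_lpm *lpm = rte_lpm_create("demo_lpm", rte_socket_id(), &cfg);
	if (lpm == NULL)
		return;

	/* Route 10.0.0.0/8 -> next-hop id 7. */
	rte_lpm_add(lpm, RTE_IPV4(10, 0, 0, 0), 8, 7);

	uint32_t next_hop = 0;
	if (rte_lpm_lookup(lpm, RTE_IPV4(10, 1, 2, 3), &next_hop) == 0)
		printf("next hop = %u\n", next_hop);

	rte_lpm_free(lpm);
}

The install step itself continues below.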
01:00:04.007 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.007 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.267 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.267 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.267 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.267 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 01:00:04.267 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.267 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 01:00:04.267 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.267 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 01:00:04.267 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.267 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 01:00:04.267 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.267 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.268 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.269 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.529 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 01:00:04.530 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 01:00:04.530 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 01:00:04.531 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 01:00:04.531 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 01:00:04.531 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 01:00:04.531 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 01:00:04.531 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 01:00:04.531 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 01:00:04.531 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 01:00:04.531 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 01:00:04.531 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 01:00:04.531 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 01:00:04.531 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 01:00:04.531 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 01:00:04.531 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 01:00:04.531 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 01:00:04.531 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 01:00:04.531 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 01:00:04.531 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 01:00:04.531 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 01:00:04.531 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 01:00:04.531 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 01:00:04.531 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 01:00:04.531 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 01:00:04.531 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 01:00:04.531 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 01:00:04.531 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 01:00:04.531 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 01:00:04.531 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 01:00:04.531 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 01:00:04.531 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 01:00:04.531 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 01:00:04.531 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 01:00:04.531 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 01:00:04.531 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 01:00:04.531 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 01:00:04.531 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 01:00:04.531 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 01:00:04.531 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 01:00:04.531 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 01:00:04.531 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 01:00:04.531 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 01:00:04.531 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 01:00:04.531 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 01:00:04.531 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 01:00:04.531 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 01:00:04.531 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 01:00:04.531 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 01:00:04.531 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 01:00:04.531 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 01:00:04.531 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 01:00:04.531 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 01:00:04.531 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 01:00:04.531 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 01:00:04.531 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 01:00:04.531 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 01:00:04.531 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 01:00:04.531 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 01:00:04.531 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 01:00:04.531 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 01:00:04.531 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 01:00:04.531 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 01:00:04.531 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 01:00:04.531 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 01:00:04.531 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 01:00:04.531 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 01:00:04.531 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 01:00:04.531 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 01:00:04.531 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 01:00:04.531 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 01:00:04.531 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 01:00:04.531 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 01:00:04.531 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 01:00:04.531 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 01:00:04.531 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 01:00:04.531 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 01:00:04.531 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 01:00:04.531 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 01:00:04.531 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 01:00:04.531 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 01:00:04.531 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 01:00:04.531 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 01:00:04.531 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 01:00:04.531 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 01:00:04.531 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 01:00:04.531 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 01:00:04.531 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 01:00:04.531 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 01:00:04.531 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 01:00:04.531 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 01:00:04.531 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 01:00:04.531 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 01:00:04.531 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 01:00:04.531 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 01:00:04.531 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 01:00:04.531 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 01:00:04.531 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 01:00:04.531 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 01:00:04.531 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 01:00:04.531 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 01:00:04.531 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 01:00:04.531 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 01:00:04.531 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 01:00:04.531 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 01:00:04.531 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 01:00:04.531 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 01:00:04.531 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 01:00:04.531 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 01:00:04.531 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 01:00:04.531 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 01:00:04.531 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 01:00:04.531 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 01:00:04.531 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 01:00:04.531 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 01:00:04.531 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 01:00:04.531 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 01:00:04.532 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 01:00:04.532 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 01:00:04.532 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 01:00:04.532 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 01:00:04.532 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 01:00:04.532 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 01:00:04.532 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 01:00:04.532 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 01:00:04.532 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 01:00:04.532 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 01:00:04.532 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 01:00:04.532 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 01:00:04.532 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 01:00:04.532 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 01:00:04.532 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 01:00:04.532 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
01:00:04.532 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 01:00:04.532 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 01:00:04.532 10:57:09 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 01:00:04.532 ************************************ 01:00:04.532 END TEST build_native_dpdk 01:00:04.532 10:57:09 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /home/vagrant/spdk_repo/spdk 01:00:04.532 01:00:04.532 real 0m46.801s 01:00:04.532 user 5m12.622s 01:00:04.532 sys 1m4.227s 01:00:04.532 10:57:09 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 01:00:04.532 10:57:09 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 01:00:04.532 ************************************ 01:00:04.532 10:57:09 -- common/autotest_common.sh@1142 -- $ return 0 01:00:04.532 10:57:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 01:00:04.532 10:57:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 01:00:04.532 10:57:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 01:00:04.532 10:57:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 01:00:04.532 10:57:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 01:00:04.532 10:57:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 01:00:04.532 10:57:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 01:00:04.532 10:57:09 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 01:00:04.790 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 01:00:04.790 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 01:00:04.790 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 01:00:05.049 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:00:05.308 Using 'verbs' RDMA provider 01:00:21.569 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 01:00:39.691 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 01:00:39.691 Creating mk/config.mk...done. 01:00:39.691 Creating mk/cc.flags.mk...done. 01:00:39.691 Type 'make' to build. 01:00:39.691 10:57:42 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 01:00:39.691 10:57:42 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 01:00:39.691 10:57:42 -- common/autotest_common.sh@1105 -- $ xtrace_disable 01:00:39.691 10:57:42 -- common/autotest_common.sh@10 -- $ set +x 01:00:39.691 ************************************ 01:00:39.691 START TEST make 01:00:39.691 ************************************ 01:00:39.691 10:57:42 make -- common/autotest_common.sh@1123 -- $ make -j10 01:00:39.691 make[1]: Nothing to be done for 'all'. 
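The configure invocation recorded above builds SPDK against the DPDK tree that was just installed into /home/vagrant/spdk_repo/dpdk/build, pointing at it with --with-dpdk and picking up libdpdk.pc from build/lib/pkgconfig. A minimal sketch of reproducing that hand-off by hand, assuming DPDK v23.11 has already been built and installed into that prefix with meson/ninja; the flag set below is trimmed from the log, and options such as fio, RDMA or ublk support depend on what the host has installed:

  # Sketch only: reproduce the DPDK -> SPDK configure hand-off seen in this run.
  # Assumes DPDK was already installed into $DPDK_PREFIX (here the in-tree "build" dir).
  DPDK_PREFIX=/home/vagrant/spdk_repo/dpdk/build
  # Optional: make libdpdk.pc resolvable via pkg-config, matching the
  # "Using .../build/lib/pkgconfig for additional libs" line in the log.
  export PKG_CONFIG_PATH="$DPDK_PREFIX/lib/pkgconfig:${PKG_CONFIG_PATH:-}"
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-shared \
              --with-uring --with-dpdk="$DPDK_PREFIX"
  make -j"$(nproc)"   # the CI run above uses -j10

With --with-shared and an external DPDK, the librte_*.so libraries and the dpdk/pmds-24.0 driver symlinks installed earlier are what the resulting SPDK binaries load at runtime, which is why the symlink-installation step precedes the configure call in this log.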
01:00:57.764 CC lib/ut_mock/mock.o 01:00:57.764 CC lib/log/log.o 01:00:57.764 CC lib/log/log_flags.o 01:00:57.764 CC lib/log/log_deprecated.o 01:00:57.764 CC lib/ut/ut.o 01:00:57.764 LIB libspdk_ut_mock.a 01:00:57.764 LIB libspdk_log.a 01:00:57.764 LIB libspdk_ut.a 01:00:57.764 SO libspdk_ut_mock.so.6.0 01:00:57.764 SO libspdk_ut.so.2.0 01:00:57.764 SO libspdk_log.so.7.0 01:00:57.764 SYMLINK libspdk_ut_mock.so 01:00:57.764 SYMLINK libspdk_log.so 01:00:57.764 SYMLINK libspdk_ut.so 01:00:57.764 CC lib/util/base64.o 01:00:57.764 CC lib/util/bit_array.o 01:00:57.764 CC lib/util/crc16.o 01:00:57.764 CC lib/util/cpuset.o 01:00:57.764 CC lib/dma/dma.o 01:00:57.764 CC lib/util/crc32.o 01:00:57.764 CC lib/util/crc32c.o 01:00:57.764 CXX lib/trace_parser/trace.o 01:00:57.764 CC lib/ioat/ioat.o 01:00:57.764 CC lib/util/crc32_ieee.o 01:00:57.764 CC lib/vfio_user/host/vfio_user_pci.o 01:00:57.764 CC lib/util/crc64.o 01:00:57.764 CC lib/util/dif.o 01:00:57.764 CC lib/util/fd.o 01:00:57.764 LIB libspdk_dma.a 01:00:57.764 CC lib/util/fd_group.o 01:00:57.764 CC lib/vfio_user/host/vfio_user.o 01:00:57.764 SO libspdk_dma.so.4.0 01:00:57.764 CC lib/util/file.o 01:00:57.764 CC lib/util/hexlify.o 01:00:57.764 LIB libspdk_ioat.a 01:00:57.764 SYMLINK libspdk_dma.so 01:00:57.764 CC lib/util/iov.o 01:00:57.765 CC lib/util/math.o 01:00:57.765 SO libspdk_ioat.so.7.0 01:00:57.765 CC lib/util/net.o 01:00:57.765 SYMLINK libspdk_ioat.so 01:00:57.765 CC lib/util/pipe.o 01:00:57.765 CC lib/util/strerror_tls.o 01:00:57.765 LIB libspdk_vfio_user.a 01:00:57.765 CC lib/util/string.o 01:00:57.765 CC lib/util/uuid.o 01:00:57.765 SO libspdk_vfio_user.so.5.0 01:00:57.765 CC lib/util/xor.o 01:00:57.765 CC lib/util/zipf.o 01:00:57.765 SYMLINK libspdk_vfio_user.so 01:00:58.023 LIB libspdk_util.a 01:00:58.023 SO libspdk_util.so.10.0 01:00:58.287 LIB libspdk_trace_parser.a 01:00:58.287 SO libspdk_trace_parser.so.5.0 01:00:58.287 SYMLINK libspdk_util.so 01:00:58.287 SYMLINK libspdk_trace_parser.so 01:00:58.287 CC lib/conf/conf.o 01:00:58.287 CC lib/json/json_parse.o 01:00:58.287 CC lib/rdma_provider/common.o 01:00:58.287 CC lib/json/json_util.o 01:00:58.287 CC lib/vmd/vmd.o 01:00:58.287 CC lib/rdma_provider/rdma_provider_verbs.o 01:00:58.287 CC lib/json/json_write.o 01:00:58.287 CC lib/rdma_utils/rdma_utils.o 01:00:58.287 CC lib/env_dpdk/env.o 01:00:58.287 CC lib/idxd/idxd.o 01:00:58.555 CC lib/vmd/led.o 01:00:58.555 LIB libspdk_rdma_provider.a 01:00:58.555 SO libspdk_rdma_provider.so.6.0 01:00:58.555 LIB libspdk_conf.a 01:00:58.555 CC lib/env_dpdk/memory.o 01:00:58.555 CC lib/env_dpdk/pci.o 01:00:58.555 LIB libspdk_rdma_utils.a 01:00:58.555 SO libspdk_conf.so.6.0 01:00:58.555 LIB libspdk_json.a 01:00:58.555 SYMLINK libspdk_rdma_provider.so 01:00:58.555 SO libspdk_rdma_utils.so.1.0 01:00:58.555 CC lib/env_dpdk/init.o 01:00:58.555 SO libspdk_json.so.6.0 01:00:58.555 SYMLINK libspdk_conf.so 01:00:58.817 CC lib/env_dpdk/threads.o 01:00:58.817 CC lib/env_dpdk/pci_ioat.o 01:00:58.817 SYMLINK libspdk_rdma_utils.so 01:00:58.817 CC lib/env_dpdk/pci_virtio.o 01:00:58.817 SYMLINK libspdk_json.so 01:00:58.817 CC lib/env_dpdk/pci_vmd.o 01:00:58.817 CC lib/env_dpdk/pci_idxd.o 01:00:58.817 CC lib/env_dpdk/pci_event.o 01:00:58.817 CC lib/env_dpdk/sigbus_handler.o 01:00:58.817 CC lib/idxd/idxd_user.o 01:00:58.817 CC lib/env_dpdk/pci_dpdk.o 01:00:58.817 CC lib/env_dpdk/pci_dpdk_2207.o 01:00:58.817 LIB libspdk_vmd.a 01:00:59.075 SO libspdk_vmd.so.6.0 01:00:59.075 CC lib/idxd/idxd_kernel.o 01:00:59.075 CC lib/env_dpdk/pci_dpdk_2211.o 01:00:59.075 SYMLINK 
libspdk_vmd.so 01:00:59.075 LIB libspdk_idxd.a 01:00:59.075 SO libspdk_idxd.so.12.0 01:00:59.333 SYMLINK libspdk_idxd.so 01:00:59.333 CC lib/jsonrpc/jsonrpc_server.o 01:00:59.333 CC lib/jsonrpc/jsonrpc_client.o 01:00:59.333 CC lib/jsonrpc/jsonrpc_server_tcp.o 01:00:59.333 CC lib/jsonrpc/jsonrpc_client_tcp.o 01:00:59.592 LIB libspdk_jsonrpc.a 01:00:59.592 LIB libspdk_env_dpdk.a 01:00:59.592 SO libspdk_jsonrpc.so.6.0 01:00:59.592 SO libspdk_env_dpdk.so.14.1 01:00:59.592 SYMLINK libspdk_jsonrpc.so 01:00:59.851 SYMLINK libspdk_env_dpdk.so 01:01:00.110 CC lib/rpc/rpc.o 01:01:00.368 LIB libspdk_rpc.a 01:01:00.368 SO libspdk_rpc.so.6.0 01:01:00.368 SYMLINK libspdk_rpc.so 01:01:00.937 CC lib/keyring/keyring.o 01:01:00.937 CC lib/trace/trace.o 01:01:00.937 CC lib/trace/trace_flags.o 01:01:00.937 CC lib/keyring/keyring_rpc.o 01:01:00.937 CC lib/trace/trace_rpc.o 01:01:00.937 CC lib/notify/notify.o 01:01:00.937 CC lib/notify/notify_rpc.o 01:01:00.937 LIB libspdk_notify.a 01:01:00.937 LIB libspdk_keyring.a 01:01:00.937 SO libspdk_notify.so.6.0 01:01:00.937 LIB libspdk_trace.a 01:01:01.196 SO libspdk_keyring.so.1.0 01:01:01.196 SYMLINK libspdk_notify.so 01:01:01.196 SO libspdk_trace.so.10.0 01:01:01.196 SYMLINK libspdk_keyring.so 01:01:01.196 SYMLINK libspdk_trace.so 01:01:01.454 CC lib/sock/sock.o 01:01:01.454 CC lib/sock/sock_rpc.o 01:01:01.454 CC lib/thread/thread.o 01:01:01.454 CC lib/thread/iobuf.o 01:01:02.021 LIB libspdk_sock.a 01:01:02.021 SO libspdk_sock.so.10.0 01:01:02.021 SYMLINK libspdk_sock.so 01:01:02.587 CC lib/nvme/nvme_ctrlr_cmd.o 01:01:02.587 CC lib/nvme/nvme_ctrlr.o 01:01:02.587 CC lib/nvme/nvme_fabric.o 01:01:02.587 CC lib/nvme/nvme_ns_cmd.o 01:01:02.587 CC lib/nvme/nvme_pcie_common.o 01:01:02.587 CC lib/nvme/nvme_ns.o 01:01:02.587 CC lib/nvme/nvme_pcie.o 01:01:02.587 CC lib/nvme/nvme_qpair.o 01:01:02.587 CC lib/nvme/nvme.o 01:01:02.845 LIB libspdk_thread.a 01:01:02.845 SO libspdk_thread.so.10.1 01:01:02.845 SYMLINK libspdk_thread.so 01:01:02.845 CC lib/nvme/nvme_quirks.o 01:01:03.103 CC lib/nvme/nvme_transport.o 01:01:03.103 CC lib/nvme/nvme_discovery.o 01:01:03.103 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 01:01:03.103 CC lib/nvme/nvme_ns_ocssd_cmd.o 01:01:03.103 CC lib/nvme/nvme_tcp.o 01:01:03.103 CC lib/nvme/nvme_opal.o 01:01:03.103 CC lib/nvme/nvme_io_msg.o 01:01:03.361 CC lib/nvme/nvme_poll_group.o 01:01:03.361 CC lib/nvme/nvme_zns.o 01:01:03.619 CC lib/nvme/nvme_stubs.o 01:01:03.619 CC lib/nvme/nvme_auth.o 01:01:03.619 CC lib/nvme/nvme_cuse.o 01:01:03.619 CC lib/nvme/nvme_rdma.o 01:01:03.878 CC lib/accel/accel.o 01:01:03.878 CC lib/accel/accel_rpc.o 01:01:03.878 CC lib/accel/accel_sw.o 01:01:03.878 CC lib/blob/blobstore.o 01:01:04.136 CC lib/blob/request.o 01:01:04.136 CC lib/blob/zeroes.o 01:01:04.136 CC lib/blob/blob_bs_dev.o 01:01:04.394 CC lib/init/json_config.o 01:01:04.394 CC lib/init/subsystem.o 01:01:04.394 CC lib/init/subsystem_rpc.o 01:01:04.394 CC lib/init/rpc.o 01:01:04.394 CC lib/virtio/virtio.o 01:01:04.394 CC lib/virtio/virtio_vfio_user.o 01:01:04.394 CC lib/virtio/virtio_vhost_user.o 01:01:04.394 CC lib/virtio/virtio_pci.o 01:01:04.652 LIB libspdk_init.a 01:01:04.652 SO libspdk_init.so.5.0 01:01:04.652 LIB libspdk_accel.a 01:01:04.652 SO libspdk_accel.so.16.0 01:01:04.652 SYMLINK libspdk_init.so 01:01:04.652 LIB libspdk_nvme.a 01:01:04.911 SYMLINK libspdk_accel.so 01:01:04.911 LIB libspdk_virtio.a 01:01:04.911 SO libspdk_virtio.so.7.0 01:01:04.911 SYMLINK libspdk_virtio.so 01:01:04.911 SO libspdk_nvme.so.13.1 01:01:04.911 CC lib/event/app.o 01:01:04.911 CC 
lib/event/reactor.o 01:01:04.911 CC lib/event/log_rpc.o 01:01:04.911 CC lib/event/app_rpc.o 01:01:04.911 CC lib/event/scheduler_static.o 01:01:05.168 CC lib/bdev/bdev.o 01:01:05.168 CC lib/bdev/bdev_rpc.o 01:01:05.168 CC lib/bdev/bdev_zone.o 01:01:05.168 CC lib/bdev/part.o 01:01:05.168 CC lib/bdev/scsi_nvme.o 01:01:05.168 SYMLINK libspdk_nvme.so 01:01:05.425 LIB libspdk_event.a 01:01:05.425 SO libspdk_event.so.14.0 01:01:05.425 SYMLINK libspdk_event.so 01:01:06.356 LIB libspdk_blob.a 01:01:06.356 SO libspdk_blob.so.11.0 01:01:06.613 SYMLINK libspdk_blob.so 01:01:06.871 CC lib/blobfs/tree.o 01:01:06.871 CC lib/blobfs/blobfs.o 01:01:06.871 CC lib/lvol/lvol.o 01:01:07.129 LIB libspdk_bdev.a 01:01:07.388 SO libspdk_bdev.so.16.0 01:01:07.388 SYMLINK libspdk_bdev.so 01:01:07.645 LIB libspdk_blobfs.a 01:01:07.645 SO libspdk_blobfs.so.10.0 01:01:07.645 CC lib/nvmf/ctrlr.o 01:01:07.645 CC lib/nvmf/subsystem.o 01:01:07.645 CC lib/nvmf/ctrlr_discovery.o 01:01:07.645 CC lib/nvmf/ctrlr_bdev.o 01:01:07.645 CC lib/scsi/dev.o 01:01:07.645 CC lib/ftl/ftl_core.o 01:01:07.645 CC lib/ublk/ublk.o 01:01:07.645 CC lib/nbd/nbd.o 01:01:07.645 SYMLINK libspdk_blobfs.so 01:01:07.645 CC lib/ftl/ftl_init.o 01:01:07.645 LIB libspdk_lvol.a 01:01:07.901 SO libspdk_lvol.so.10.0 01:01:07.901 SYMLINK libspdk_lvol.so 01:01:07.901 CC lib/ftl/ftl_layout.o 01:01:07.901 CC lib/scsi/lun.o 01:01:07.901 CC lib/scsi/port.o 01:01:07.901 CC lib/scsi/scsi.o 01:01:08.165 CC lib/nbd/nbd_rpc.o 01:01:08.165 CC lib/ublk/ublk_rpc.o 01:01:08.165 CC lib/nvmf/nvmf.o 01:01:08.165 CC lib/ftl/ftl_debug.o 01:01:08.165 CC lib/ftl/ftl_io.o 01:01:08.165 CC lib/scsi/scsi_bdev.o 01:01:08.165 LIB libspdk_nbd.a 01:01:08.165 CC lib/scsi/scsi_pr.o 01:01:08.165 LIB libspdk_ublk.a 01:01:08.165 SO libspdk_nbd.so.7.0 01:01:08.165 SO libspdk_ublk.so.3.0 01:01:08.165 CC lib/nvmf/nvmf_rpc.o 01:01:08.432 SYMLINK libspdk_ublk.so 01:01:08.432 SYMLINK libspdk_nbd.so 01:01:08.432 CC lib/scsi/scsi_rpc.o 01:01:08.432 CC lib/scsi/task.o 01:01:08.432 CC lib/ftl/ftl_sb.o 01:01:08.432 CC lib/ftl/ftl_l2p.o 01:01:08.432 CC lib/ftl/ftl_l2p_flat.o 01:01:08.432 CC lib/nvmf/transport.o 01:01:08.432 CC lib/ftl/ftl_nv_cache.o 01:01:08.432 CC lib/ftl/ftl_band.o 01:01:08.690 CC lib/nvmf/tcp.o 01:01:08.690 LIB libspdk_scsi.a 01:01:08.690 SO libspdk_scsi.so.9.0 01:01:08.690 CC lib/ftl/ftl_band_ops.o 01:01:08.690 CC lib/nvmf/stubs.o 01:01:08.690 SYMLINK libspdk_scsi.so 01:01:08.690 CC lib/nvmf/mdns_server.o 01:01:08.948 CC lib/nvmf/rdma.o 01:01:08.948 CC lib/nvmf/auth.o 01:01:08.948 CC lib/iscsi/conn.o 01:01:08.948 CC lib/iscsi/init_grp.o 01:01:08.948 CC lib/vhost/vhost.o 01:01:08.948 CC lib/vhost/vhost_rpc.o 01:01:09.206 CC lib/vhost/vhost_scsi.o 01:01:09.206 CC lib/vhost/vhost_blk.o 01:01:09.207 CC lib/iscsi/iscsi.o 01:01:09.207 CC lib/ftl/ftl_writer.o 01:01:09.464 CC lib/ftl/ftl_rq.o 01:01:09.464 CC lib/iscsi/md5.o 01:01:09.722 CC lib/iscsi/param.o 01:01:09.722 CC lib/iscsi/portal_grp.o 01:01:09.722 CC lib/iscsi/tgt_node.o 01:01:09.722 CC lib/ftl/ftl_reloc.o 01:01:09.722 CC lib/ftl/ftl_l2p_cache.o 01:01:09.722 CC lib/vhost/rte_vhost_user.o 01:01:09.722 CC lib/iscsi/iscsi_subsystem.o 01:01:09.980 CC lib/iscsi/iscsi_rpc.o 01:01:09.980 CC lib/iscsi/task.o 01:01:09.980 CC lib/ftl/ftl_p2l.o 01:01:09.980 CC lib/ftl/mngt/ftl_mngt.o 01:01:09.980 CC lib/ftl/mngt/ftl_mngt_bdev.o 01:01:09.980 CC lib/ftl/mngt/ftl_mngt_shutdown.o 01:01:10.237 CC lib/ftl/mngt/ftl_mngt_startup.o 01:01:10.237 CC lib/ftl/mngt/ftl_mngt_md.o 01:01:10.237 CC lib/ftl/mngt/ftl_mngt_misc.o 01:01:10.237 CC 
lib/ftl/mngt/ftl_mngt_ioch.o 01:01:10.237 CC lib/ftl/mngt/ftl_mngt_l2p.o 01:01:10.237 CC lib/ftl/mngt/ftl_mngt_band.o 01:01:10.237 CC lib/ftl/mngt/ftl_mngt_self_test.o 01:01:10.237 CC lib/ftl/mngt/ftl_mngt_p2l.o 01:01:10.237 CC lib/ftl/mngt/ftl_mngt_recovery.o 01:01:10.496 CC lib/ftl/mngt/ftl_mngt_upgrade.o 01:01:10.496 LIB libspdk_iscsi.a 01:01:10.496 CC lib/ftl/utils/ftl_conf.o 01:01:10.496 CC lib/ftl/utils/ftl_md.o 01:01:10.496 CC lib/ftl/utils/ftl_mempool.o 01:01:10.496 CC lib/ftl/utils/ftl_bitmap.o 01:01:10.496 SO libspdk_iscsi.so.8.0 01:01:10.496 CC lib/ftl/utils/ftl_property.o 01:01:10.496 LIB libspdk_nvmf.a 01:01:10.496 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 01:01:10.496 CC lib/ftl/upgrade/ftl_layout_upgrade.o 01:01:10.496 CC lib/ftl/upgrade/ftl_sb_upgrade.o 01:01:10.496 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 01:01:10.757 SYMLINK libspdk_iscsi.so 01:01:10.757 CC lib/ftl/upgrade/ftl_band_upgrade.o 01:01:10.757 SO libspdk_nvmf.so.19.0 01:01:10.757 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 01:01:10.757 LIB libspdk_vhost.a 01:01:10.757 CC lib/ftl/upgrade/ftl_trim_upgrade.o 01:01:10.757 CC lib/ftl/upgrade/ftl_sb_v3.o 01:01:10.757 CC lib/ftl/upgrade/ftl_sb_v5.o 01:01:10.757 SO libspdk_vhost.so.8.0 01:01:10.757 CC lib/ftl/nvc/ftl_nvc_dev.o 01:01:10.757 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 01:01:10.757 CC lib/ftl/base/ftl_base_dev.o 01:01:10.757 SYMLINK libspdk_nvmf.so 01:01:10.757 CC lib/ftl/base/ftl_base_bdev.o 01:01:10.757 CC lib/ftl/ftl_trace.o 01:01:11.021 SYMLINK libspdk_vhost.so 01:01:11.021 LIB libspdk_ftl.a 01:01:11.278 SO libspdk_ftl.so.9.0 01:01:11.536 SYMLINK libspdk_ftl.so 01:01:12.100 CC module/env_dpdk/env_dpdk_rpc.o 01:01:12.100 CC module/accel/ioat/accel_ioat.o 01:01:12.100 CC module/scheduler/dpdk_governor/dpdk_governor.o 01:01:12.100 CC module/scheduler/gscheduler/gscheduler.o 01:01:12.100 CC module/keyring/linux/keyring.o 01:01:12.100 CC module/scheduler/dynamic/scheduler_dynamic.o 01:01:12.100 CC module/keyring/file/keyring.o 01:01:12.100 CC module/sock/posix/posix.o 01:01:12.100 CC module/blob/bdev/blob_bdev.o 01:01:12.100 CC module/accel/error/accel_error.o 01:01:12.100 LIB libspdk_env_dpdk_rpc.a 01:01:12.100 SO libspdk_env_dpdk_rpc.so.6.0 01:01:12.357 SYMLINK libspdk_env_dpdk_rpc.so 01:01:12.357 CC module/keyring/file/keyring_rpc.o 01:01:12.357 CC module/keyring/linux/keyring_rpc.o 01:01:12.357 LIB libspdk_scheduler_dpdk_governor.a 01:01:12.357 LIB libspdk_scheduler_gscheduler.a 01:01:12.357 CC module/accel/ioat/accel_ioat_rpc.o 01:01:12.357 SO libspdk_scheduler_gscheduler.so.4.0 01:01:12.357 SO libspdk_scheduler_dpdk_governor.so.4.0 01:01:12.357 CC module/accel/error/accel_error_rpc.o 01:01:12.357 LIB libspdk_scheduler_dynamic.a 01:01:12.357 SO libspdk_scheduler_dynamic.so.4.0 01:01:12.357 SYMLINK libspdk_scheduler_dpdk_governor.so 01:01:12.357 SYMLINK libspdk_scheduler_gscheduler.so 01:01:12.357 LIB libspdk_blob_bdev.a 01:01:12.357 LIB libspdk_keyring_linux.a 01:01:12.357 LIB libspdk_keyring_file.a 01:01:12.357 SO libspdk_blob_bdev.so.11.0 01:01:12.357 SO libspdk_keyring_file.so.1.0 01:01:12.357 SO libspdk_keyring_linux.so.1.0 01:01:12.357 SYMLINK libspdk_scheduler_dynamic.so 01:01:12.357 LIB libspdk_accel_ioat.a 01:01:12.357 CC module/sock/uring/uring.o 01:01:12.357 LIB libspdk_accel_error.a 01:01:12.357 SO libspdk_accel_ioat.so.6.0 01:01:12.357 SYMLINK libspdk_blob_bdev.so 01:01:12.357 SYMLINK libspdk_keyring_file.so 01:01:12.357 SYMLINK libspdk_keyring_linux.so 01:01:12.357 SO libspdk_accel_error.so.2.0 01:01:12.614 SYMLINK libspdk_accel_ioat.so 01:01:12.614 CC 
module/accel/iaa/accel_iaa.o 01:01:12.614 CC module/accel/iaa/accel_iaa_rpc.o 01:01:12.614 SYMLINK libspdk_accel_error.so 01:01:12.614 CC module/accel/dsa/accel_dsa.o 01:01:12.614 CC module/accel/dsa/accel_dsa_rpc.o 01:01:12.614 CC module/bdev/error/vbdev_error.o 01:01:12.614 LIB libspdk_accel_iaa.a 01:01:12.614 CC module/bdev/gpt/gpt.o 01:01:12.872 CC module/bdev/delay/vbdev_delay.o 01:01:12.872 SO libspdk_accel_iaa.so.3.0 01:01:12.872 LIB libspdk_sock_posix.a 01:01:12.872 LIB libspdk_accel_dsa.a 01:01:12.872 CC module/blobfs/bdev/blobfs_bdev.o 01:01:12.872 SO libspdk_sock_posix.so.6.0 01:01:12.872 SO libspdk_accel_dsa.so.5.0 01:01:12.872 CC module/bdev/lvol/vbdev_lvol.o 01:01:12.872 SYMLINK libspdk_accel_iaa.so 01:01:12.872 CC module/bdev/lvol/vbdev_lvol_rpc.o 01:01:12.872 CC module/bdev/malloc/bdev_malloc.o 01:01:12.872 SYMLINK libspdk_accel_dsa.so 01:01:12.872 CC module/bdev/malloc/bdev_malloc_rpc.o 01:01:12.872 SYMLINK libspdk_sock_posix.so 01:01:12.872 CC module/bdev/gpt/vbdev_gpt.o 01:01:12.872 CC module/blobfs/bdev/blobfs_bdev_rpc.o 01:01:12.872 CC module/bdev/error/vbdev_error_rpc.o 01:01:13.130 LIB libspdk_sock_uring.a 01:01:13.130 SO libspdk_sock_uring.so.5.0 01:01:13.130 CC module/bdev/null/bdev_null.o 01:01:13.130 CC module/bdev/delay/vbdev_delay_rpc.o 01:01:13.130 LIB libspdk_blobfs_bdev.a 01:01:13.130 SYMLINK libspdk_sock_uring.so 01:01:13.130 LIB libspdk_bdev_error.a 01:01:13.130 SO libspdk_blobfs_bdev.so.6.0 01:01:13.130 LIB libspdk_bdev_gpt.a 01:01:13.130 SO libspdk_bdev_error.so.6.0 01:01:13.130 SO libspdk_bdev_gpt.so.6.0 01:01:13.130 CC module/bdev/nvme/bdev_nvme.o 01:01:13.130 LIB libspdk_bdev_malloc.a 01:01:13.130 SYMLINK libspdk_blobfs_bdev.so 01:01:13.130 SYMLINK libspdk_bdev_error.so 01:01:13.130 CC module/bdev/nvme/bdev_nvme_rpc.o 01:01:13.130 LIB libspdk_bdev_delay.a 01:01:13.130 SO libspdk_bdev_malloc.so.6.0 01:01:13.130 SYMLINK libspdk_bdev_gpt.so 01:01:13.130 LIB libspdk_bdev_lvol.a 01:01:13.389 SO libspdk_bdev_delay.so.6.0 01:01:13.389 CC module/bdev/passthru/vbdev_passthru.o 01:01:13.389 SO libspdk_bdev_lvol.so.6.0 01:01:13.389 CC module/bdev/null/bdev_null_rpc.o 01:01:13.389 SYMLINK libspdk_bdev_malloc.so 01:01:13.389 CC module/bdev/raid/bdev_raid.o 01:01:13.389 SYMLINK libspdk_bdev_delay.so 01:01:13.389 CC module/bdev/nvme/nvme_rpc.o 01:01:13.389 CC module/bdev/nvme/bdev_mdns_client.o 01:01:13.389 SYMLINK libspdk_bdev_lvol.so 01:01:13.389 CC module/bdev/nvme/vbdev_opal.o 01:01:13.389 CC module/bdev/split/vbdev_split.o 01:01:13.389 CC module/bdev/zone_block/vbdev_zone_block.o 01:01:13.389 LIB libspdk_bdev_null.a 01:01:13.389 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 01:01:13.389 SO libspdk_bdev_null.so.6.0 01:01:13.648 CC module/bdev/passthru/vbdev_passthru_rpc.o 01:01:13.648 CC module/bdev/nvme/vbdev_opal_rpc.o 01:01:13.648 SYMLINK libspdk_bdev_null.so 01:01:13.648 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 01:01:13.648 CC module/bdev/raid/bdev_raid_rpc.o 01:01:13.648 CC module/bdev/split/vbdev_split_rpc.o 01:01:13.648 LIB libspdk_bdev_zone_block.a 01:01:13.648 LIB libspdk_bdev_passthru.a 01:01:13.648 SO libspdk_bdev_zone_block.so.6.0 01:01:13.648 SO libspdk_bdev_passthru.so.6.0 01:01:13.648 LIB libspdk_bdev_split.a 01:01:13.648 CC module/bdev/raid/bdev_raid_sb.o 01:01:13.648 SYMLINK libspdk_bdev_zone_block.so 01:01:13.906 SO libspdk_bdev_split.so.6.0 01:01:13.906 SYMLINK libspdk_bdev_passthru.so 01:01:13.906 CC module/bdev/raid/raid0.o 01:01:13.906 CC module/bdev/uring/bdev_uring.o 01:01:13.906 SYMLINK libspdk_bdev_split.so 01:01:13.906 CC 
module/bdev/raid/raid1.o 01:01:13.906 CC module/bdev/aio/bdev_aio.o 01:01:13.906 CC module/bdev/ftl/bdev_ftl.o 01:01:13.906 CC module/bdev/iscsi/bdev_iscsi.o 01:01:13.906 CC module/bdev/virtio/bdev_virtio_scsi.o 01:01:13.906 CC module/bdev/iscsi/bdev_iscsi_rpc.o 01:01:13.906 CC module/bdev/uring/bdev_uring_rpc.o 01:01:14.165 CC module/bdev/raid/concat.o 01:01:14.165 CC module/bdev/aio/bdev_aio_rpc.o 01:01:14.165 CC module/bdev/virtio/bdev_virtio_blk.o 01:01:14.165 CC module/bdev/ftl/bdev_ftl_rpc.o 01:01:14.165 CC module/bdev/virtio/bdev_virtio_rpc.o 01:01:14.165 LIB libspdk_bdev_uring.a 01:01:14.165 SO libspdk_bdev_uring.so.6.0 01:01:14.165 LIB libspdk_bdev_iscsi.a 01:01:14.165 LIB libspdk_bdev_raid.a 01:01:14.165 SYMLINK libspdk_bdev_uring.so 01:01:14.165 LIB libspdk_bdev_aio.a 01:01:14.165 SO libspdk_bdev_iscsi.so.6.0 01:01:14.165 SO libspdk_bdev_aio.so.6.0 01:01:14.423 SO libspdk_bdev_raid.so.6.0 01:01:14.423 SYMLINK libspdk_bdev_iscsi.so 01:01:14.423 LIB libspdk_bdev_ftl.a 01:01:14.423 SYMLINK libspdk_bdev_aio.so 01:01:14.423 SO libspdk_bdev_ftl.so.6.0 01:01:14.423 LIB libspdk_bdev_virtio.a 01:01:14.423 SYMLINK libspdk_bdev_raid.so 01:01:14.423 SYMLINK libspdk_bdev_ftl.so 01:01:14.423 SO libspdk_bdev_virtio.so.6.0 01:01:14.423 SYMLINK libspdk_bdev_virtio.so 01:01:15.022 LIB libspdk_bdev_nvme.a 01:01:15.022 SO libspdk_bdev_nvme.so.7.0 01:01:15.320 SYMLINK libspdk_bdev_nvme.so 01:01:15.913 CC module/event/subsystems/keyring/keyring.o 01:01:15.913 CC module/event/subsystems/vhost_blk/vhost_blk.o 01:01:15.913 CC module/event/subsystems/vmd/vmd_rpc.o 01:01:15.913 CC module/event/subsystems/vmd/vmd.o 01:01:15.913 CC module/event/subsystems/iobuf/iobuf.o 01:01:15.913 CC module/event/subsystems/iobuf/iobuf_rpc.o 01:01:15.913 CC module/event/subsystems/sock/sock.o 01:01:15.913 CC module/event/subsystems/scheduler/scheduler.o 01:01:15.913 LIB libspdk_event_keyring.a 01:01:15.913 LIB libspdk_event_vhost_blk.a 01:01:15.913 LIB libspdk_event_vmd.a 01:01:15.913 LIB libspdk_event_sock.a 01:01:15.913 LIB libspdk_event_scheduler.a 01:01:15.913 LIB libspdk_event_iobuf.a 01:01:15.913 SO libspdk_event_keyring.so.1.0 01:01:15.913 SO libspdk_event_vhost_blk.so.3.0 01:01:15.913 SO libspdk_event_vmd.so.6.0 01:01:15.913 SO libspdk_event_scheduler.so.4.0 01:01:15.913 SO libspdk_event_sock.so.5.0 01:01:15.913 SO libspdk_event_iobuf.so.3.0 01:01:16.171 SYMLINK libspdk_event_keyring.so 01:01:16.171 SYMLINK libspdk_event_vhost_blk.so 01:01:16.171 SYMLINK libspdk_event_scheduler.so 01:01:16.171 SYMLINK libspdk_event_vmd.so 01:01:16.171 SYMLINK libspdk_event_sock.so 01:01:16.171 SYMLINK libspdk_event_iobuf.so 01:01:16.429 CC module/event/subsystems/accel/accel.o 01:01:16.688 LIB libspdk_event_accel.a 01:01:16.688 SO libspdk_event_accel.so.6.0 01:01:16.688 SYMLINK libspdk_event_accel.so 01:01:17.254 CC module/event/subsystems/bdev/bdev.o 01:01:17.254 LIB libspdk_event_bdev.a 01:01:17.513 SO libspdk_event_bdev.so.6.0 01:01:17.513 SYMLINK libspdk_event_bdev.so 01:01:17.771 CC module/event/subsystems/nbd/nbd.o 01:01:17.771 CC module/event/subsystems/scsi/scsi.o 01:01:17.771 CC module/event/subsystems/ublk/ublk.o 01:01:17.771 CC module/event/subsystems/nvmf/nvmf_rpc.o 01:01:17.771 CC module/event/subsystems/nvmf/nvmf_tgt.o 01:01:18.028 LIB libspdk_event_nbd.a 01:01:18.028 LIB libspdk_event_scsi.a 01:01:18.028 LIB libspdk_event_ublk.a 01:01:18.028 SO libspdk_event_nbd.so.6.0 01:01:18.028 SO libspdk_event_scsi.so.6.0 01:01:18.028 SO libspdk_event_ublk.so.3.0 01:01:18.028 SYMLINK libspdk_event_scsi.so 01:01:18.028 SYMLINK 
libspdk_event_nbd.so 01:01:18.028 LIB libspdk_event_nvmf.a 01:01:18.028 SYMLINK libspdk_event_ublk.so 01:01:18.028 SO libspdk_event_nvmf.so.6.0 01:01:18.285 SYMLINK libspdk_event_nvmf.so 01:01:18.285 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 01:01:18.285 CC module/event/subsystems/iscsi/iscsi.o 01:01:18.542 LIB libspdk_event_vhost_scsi.a 01:01:18.542 LIB libspdk_event_iscsi.a 01:01:18.542 SO libspdk_event_vhost_scsi.so.3.0 01:01:18.542 SO libspdk_event_iscsi.so.6.0 01:01:18.542 SYMLINK libspdk_event_vhost_scsi.so 01:01:18.800 SYMLINK libspdk_event_iscsi.so 01:01:18.800 SO libspdk.so.6.0 01:01:19.058 SYMLINK libspdk.so 01:01:19.317 CXX app/trace/trace.o 01:01:19.317 CC app/trace_record/trace_record.o 01:01:19.317 TEST_HEADER include/spdk/accel.h 01:01:19.317 TEST_HEADER include/spdk/accel_module.h 01:01:19.317 TEST_HEADER include/spdk/assert.h 01:01:19.317 TEST_HEADER include/spdk/barrier.h 01:01:19.317 TEST_HEADER include/spdk/base64.h 01:01:19.317 TEST_HEADER include/spdk/bdev.h 01:01:19.317 TEST_HEADER include/spdk/bdev_module.h 01:01:19.317 TEST_HEADER include/spdk/bdev_zone.h 01:01:19.317 TEST_HEADER include/spdk/bit_array.h 01:01:19.317 CC app/nvmf_tgt/nvmf_main.o 01:01:19.317 TEST_HEADER include/spdk/bit_pool.h 01:01:19.317 TEST_HEADER include/spdk/blob_bdev.h 01:01:19.317 TEST_HEADER include/spdk/blobfs_bdev.h 01:01:19.317 TEST_HEADER include/spdk/blobfs.h 01:01:19.317 TEST_HEADER include/spdk/blob.h 01:01:19.317 TEST_HEADER include/spdk/conf.h 01:01:19.317 TEST_HEADER include/spdk/config.h 01:01:19.317 TEST_HEADER include/spdk/cpuset.h 01:01:19.317 TEST_HEADER include/spdk/crc16.h 01:01:19.317 CC examples/interrupt_tgt/interrupt_tgt.o 01:01:19.317 TEST_HEADER include/spdk/crc32.h 01:01:19.317 TEST_HEADER include/spdk/crc64.h 01:01:19.317 TEST_HEADER include/spdk/dif.h 01:01:19.317 TEST_HEADER include/spdk/dma.h 01:01:19.317 TEST_HEADER include/spdk/endian.h 01:01:19.317 TEST_HEADER include/spdk/env_dpdk.h 01:01:19.317 TEST_HEADER include/spdk/env.h 01:01:19.317 TEST_HEADER include/spdk/event.h 01:01:19.317 TEST_HEADER include/spdk/fd_group.h 01:01:19.317 TEST_HEADER include/spdk/fd.h 01:01:19.317 TEST_HEADER include/spdk/file.h 01:01:19.317 TEST_HEADER include/spdk/ftl.h 01:01:19.317 TEST_HEADER include/spdk/gpt_spec.h 01:01:19.317 TEST_HEADER include/spdk/hexlify.h 01:01:19.317 TEST_HEADER include/spdk/histogram_data.h 01:01:19.317 CC examples/ioat/perf/perf.o 01:01:19.317 TEST_HEADER include/spdk/idxd.h 01:01:19.317 TEST_HEADER include/spdk/idxd_spec.h 01:01:19.317 TEST_HEADER include/spdk/init.h 01:01:19.317 TEST_HEADER include/spdk/ioat.h 01:01:19.317 CC examples/util/zipf/zipf.o 01:01:19.317 TEST_HEADER include/spdk/ioat_spec.h 01:01:19.317 TEST_HEADER include/spdk/iscsi_spec.h 01:01:19.317 TEST_HEADER include/spdk/json.h 01:01:19.317 TEST_HEADER include/spdk/jsonrpc.h 01:01:19.317 TEST_HEADER include/spdk/keyring.h 01:01:19.317 TEST_HEADER include/spdk/keyring_module.h 01:01:19.317 CC test/thread/poller_perf/poller_perf.o 01:01:19.317 TEST_HEADER include/spdk/likely.h 01:01:19.317 TEST_HEADER include/spdk/log.h 01:01:19.317 CC test/dma/test_dma/test_dma.o 01:01:19.317 TEST_HEADER include/spdk/lvol.h 01:01:19.317 TEST_HEADER include/spdk/memory.h 01:01:19.317 CC test/app/bdev_svc/bdev_svc.o 01:01:19.317 TEST_HEADER include/spdk/mmio.h 01:01:19.317 TEST_HEADER include/spdk/nbd.h 01:01:19.317 TEST_HEADER include/spdk/net.h 01:01:19.317 TEST_HEADER include/spdk/notify.h 01:01:19.317 TEST_HEADER include/spdk/nvme.h 01:01:19.317 TEST_HEADER include/spdk/nvme_intel.h 
01:01:19.317 TEST_HEADER include/spdk/nvme_ocssd.h 01:01:19.317 TEST_HEADER include/spdk/nvme_ocssd_spec.h 01:01:19.317 TEST_HEADER include/spdk/nvme_spec.h 01:01:19.317 TEST_HEADER include/spdk/nvme_zns.h 01:01:19.317 TEST_HEADER include/spdk/nvmf_cmd.h 01:01:19.317 TEST_HEADER include/spdk/nvmf_fc_spec.h 01:01:19.317 TEST_HEADER include/spdk/nvmf.h 01:01:19.317 TEST_HEADER include/spdk/nvmf_spec.h 01:01:19.317 TEST_HEADER include/spdk/nvmf_transport.h 01:01:19.317 TEST_HEADER include/spdk/opal.h 01:01:19.317 TEST_HEADER include/spdk/opal_spec.h 01:01:19.317 TEST_HEADER include/spdk/pci_ids.h 01:01:19.317 TEST_HEADER include/spdk/pipe.h 01:01:19.317 TEST_HEADER include/spdk/queue.h 01:01:19.317 TEST_HEADER include/spdk/reduce.h 01:01:19.317 TEST_HEADER include/spdk/rpc.h 01:01:19.317 TEST_HEADER include/spdk/scheduler.h 01:01:19.317 TEST_HEADER include/spdk/scsi.h 01:01:19.317 TEST_HEADER include/spdk/scsi_spec.h 01:01:19.317 TEST_HEADER include/spdk/sock.h 01:01:19.317 TEST_HEADER include/spdk/stdinc.h 01:01:19.317 TEST_HEADER include/spdk/string.h 01:01:19.317 TEST_HEADER include/spdk/thread.h 01:01:19.317 TEST_HEADER include/spdk/trace.h 01:01:19.317 TEST_HEADER include/spdk/trace_parser.h 01:01:19.317 TEST_HEADER include/spdk/tree.h 01:01:19.317 TEST_HEADER include/spdk/ublk.h 01:01:19.317 TEST_HEADER include/spdk/util.h 01:01:19.317 TEST_HEADER include/spdk/uuid.h 01:01:19.317 TEST_HEADER include/spdk/version.h 01:01:19.317 LINK nvmf_tgt 01:01:19.317 TEST_HEADER include/spdk/vfio_user_pci.h 01:01:19.317 TEST_HEADER include/spdk/vfio_user_spec.h 01:01:19.578 TEST_HEADER include/spdk/vhost.h 01:01:19.578 TEST_HEADER include/spdk/vmd.h 01:01:19.578 TEST_HEADER include/spdk/xor.h 01:01:19.578 TEST_HEADER include/spdk/zipf.h 01:01:19.578 CXX test/cpp_headers/accel.o 01:01:19.578 LINK interrupt_tgt 01:01:19.578 LINK zipf 01:01:19.578 LINK spdk_trace_record 01:01:19.578 LINK poller_perf 01:01:19.578 LINK bdev_svc 01:01:19.578 LINK ioat_perf 01:01:19.578 LINK spdk_trace 01:01:19.578 CXX test/cpp_headers/accel_module.o 01:01:19.578 CXX test/cpp_headers/assert.o 01:01:19.578 CXX test/cpp_headers/barrier.o 01:01:19.578 CXX test/cpp_headers/base64.o 01:01:19.578 CXX test/cpp_headers/bdev.o 01:01:19.578 CXX test/cpp_headers/bdev_module.o 01:01:19.578 LINK test_dma 01:01:19.838 CXX test/cpp_headers/bdev_zone.o 01:01:19.838 CC examples/ioat/verify/verify.o 01:01:19.838 CXX test/cpp_headers/bit_array.o 01:01:19.838 CC test/app/histogram_perf/histogram_perf.o 01:01:19.838 CC app/iscsi_tgt/iscsi_tgt.o 01:01:19.838 CC test/app/jsoncat/jsoncat.o 01:01:19.838 CC test/app/stub/stub.o 01:01:19.838 CXX test/cpp_headers/bit_pool.o 01:01:19.838 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 01:01:20.095 LINK histogram_perf 01:01:20.095 LINK verify 01:01:20.095 LINK jsoncat 01:01:20.095 CC examples/thread/thread/thread_ex.o 01:01:20.095 CXX test/cpp_headers/blob_bdev.o 01:01:20.095 LINK iscsi_tgt 01:01:20.095 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 01:01:20.095 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 01:01:20.095 LINK stub 01:01:20.095 CXX test/cpp_headers/blobfs_bdev.o 01:01:20.095 CXX test/cpp_headers/blobfs.o 01:01:20.095 CXX test/cpp_headers/blob.o 01:01:20.095 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 01:01:20.095 LINK thread 01:01:20.353 LINK nvme_fuzz 01:01:20.353 CXX test/cpp_headers/conf.o 01:01:20.353 CC app/spdk_lspci/spdk_lspci.o 01:01:20.353 CC app/spdk_nvme_perf/perf.o 01:01:20.353 CC test/event/event_perf/event_perf.o 01:01:20.353 CC app/spdk_tgt/spdk_tgt.o 01:01:20.353 CXX 
test/cpp_headers/config.o 01:01:20.353 CC test/env/mem_callbacks/mem_callbacks.o 01:01:20.353 CXX test/cpp_headers/cpuset.o 01:01:20.611 LINK spdk_lspci 01:01:20.611 LINK event_perf 01:01:20.611 LINK vhost_fuzz 01:01:20.611 CC examples/vmd/lsvmd/lsvmd.o 01:01:20.611 CC examples/sock/hello_world/hello_sock.o 01:01:20.611 LINK spdk_tgt 01:01:20.611 CXX test/cpp_headers/crc16.o 01:01:20.611 LINK lsvmd 01:01:20.867 CC test/event/reactor/reactor.o 01:01:20.867 CC test/event/reactor_perf/reactor_perf.o 01:01:20.867 CC test/env/vtophys/vtophys.o 01:01:20.867 CXX test/cpp_headers/crc32.o 01:01:20.867 LINK hello_sock 01:01:20.867 CXX test/cpp_headers/crc64.o 01:01:20.867 LINK reactor 01:01:20.867 LINK reactor_perf 01:01:20.867 LINK vtophys 01:01:20.867 CXX test/cpp_headers/dif.o 01:01:20.867 LINK mem_callbacks 01:01:20.867 CC examples/vmd/led/led.o 01:01:21.125 CC app/spdk_nvme_identify/identify.o 01:01:21.125 CC app/spdk_nvme_discover/discovery_aer.o 01:01:21.125 CXX test/cpp_headers/dma.o 01:01:21.125 LINK led 01:01:21.125 CC app/spdk_top/spdk_top.o 01:01:21.125 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 01:01:21.125 LINK spdk_nvme_perf 01:01:21.125 CC test/event/app_repeat/app_repeat.o 01:01:21.125 LINK spdk_nvme_discover 01:01:21.397 CXX test/cpp_headers/endian.o 01:01:21.397 CC test/nvme/aer/aer.o 01:01:21.397 LINK env_dpdk_post_init 01:01:21.397 LINK app_repeat 01:01:21.397 CXX test/cpp_headers/env_dpdk.o 01:01:21.397 LINK iscsi_fuzz 01:01:21.397 CC app/vhost/vhost.o 01:01:21.397 CC examples/idxd/perf/perf.o 01:01:21.655 CC test/env/memory/memory_ut.o 01:01:21.655 CXX test/cpp_headers/env.o 01:01:21.655 LINK aer 01:01:21.655 CC app/spdk_dd/spdk_dd.o 01:01:21.655 LINK vhost 01:01:21.655 CC test/event/scheduler/scheduler.o 01:01:21.655 CXX test/cpp_headers/event.o 01:01:21.655 LINK spdk_nvme_identify 01:01:21.913 LINK idxd_perf 01:01:21.913 CC test/nvme/reset/reset.o 01:01:21.913 CC app/fio/nvme/fio_plugin.o 01:01:21.913 CXX test/cpp_headers/fd_group.o 01:01:21.913 CXX test/cpp_headers/fd.o 01:01:21.913 CXX test/cpp_headers/file.o 01:01:21.913 LINK scheduler 01:01:21.913 LINK spdk_top 01:01:22.172 LINK spdk_dd 01:01:22.172 CXX test/cpp_headers/ftl.o 01:01:22.172 LINK reset 01:01:22.172 CC test/nvme/sgl/sgl.o 01:01:22.172 CC examples/accel/perf/accel_perf.o 01:01:22.172 CC test/nvme/e2edp/nvme_dp.o 01:01:22.172 CC test/nvme/overhead/overhead.o 01:01:22.172 CC test/nvme/err_injection/err_injection.o 01:01:22.172 CXX test/cpp_headers/gpt_spec.o 01:01:22.430 CXX test/cpp_headers/hexlify.o 01:01:22.430 LINK spdk_nvme 01:01:22.430 LINK nvme_dp 01:01:22.430 LINK sgl 01:01:22.430 CXX test/cpp_headers/histogram_data.o 01:01:22.430 LINK err_injection 01:01:22.430 LINK overhead 01:01:22.430 CC examples/blob/hello_world/hello_blob.o 01:01:22.430 LINK memory_ut 01:01:22.688 LINK accel_perf 01:01:22.688 CXX test/cpp_headers/idxd.o 01:01:22.688 CC examples/nvme/hello_world/hello_world.o 01:01:22.688 CXX test/cpp_headers/idxd_spec.o 01:01:22.688 CC app/fio/bdev/fio_plugin.o 01:01:22.688 CC test/nvme/startup/startup.o 01:01:22.688 CC examples/blob/cli/blobcli.o 01:01:22.688 CC test/env/pci/pci_ut.o 01:01:22.688 LINK hello_blob 01:01:22.688 CXX test/cpp_headers/init.o 01:01:22.688 CXX test/cpp_headers/ioat.o 01:01:22.945 CC test/nvme/reserve/reserve.o 01:01:22.945 LINK startup 01:01:22.945 LINK hello_world 01:01:22.945 CXX test/cpp_headers/ioat_spec.o 01:01:22.945 CC examples/bdev/hello_world/hello_bdev.o 01:01:22.945 CC test/nvme/simple_copy/simple_copy.o 01:01:22.945 LINK reserve 01:01:22.945 CC 
test/nvme/connect_stress/connect_stress.o 01:01:23.203 LINK pci_ut 01:01:23.203 LINK spdk_bdev 01:01:23.203 CC examples/bdev/bdevperf/bdevperf.o 01:01:23.203 CC examples/nvme/reconnect/reconnect.o 01:01:23.203 LINK blobcli 01:01:23.203 CXX test/cpp_headers/iscsi_spec.o 01:01:23.203 CXX test/cpp_headers/json.o 01:01:23.203 LINK connect_stress 01:01:23.203 CXX test/cpp_headers/jsonrpc.o 01:01:23.203 LINK hello_bdev 01:01:23.203 LINK simple_copy 01:01:23.462 CXX test/cpp_headers/keyring.o 01:01:23.462 CXX test/cpp_headers/keyring_module.o 01:01:23.462 CXX test/cpp_headers/likely.o 01:01:23.462 CC test/nvme/boot_partition/boot_partition.o 01:01:23.462 CXX test/cpp_headers/log.o 01:01:23.462 CXX test/cpp_headers/lvol.o 01:01:23.462 CC test/nvme/compliance/nvme_compliance.o 01:01:23.462 LINK reconnect 01:01:23.462 CC test/nvme/fused_ordering/fused_ordering.o 01:01:23.462 CXX test/cpp_headers/memory.o 01:01:23.462 LINK boot_partition 01:01:23.462 CXX test/cpp_headers/mmio.o 01:01:23.719 CXX test/cpp_headers/nbd.o 01:01:23.719 CC test/nvme/doorbell_aers/doorbell_aers.o 01:01:23.719 CC test/nvme/fdp/fdp.o 01:01:23.719 CXX test/cpp_headers/net.o 01:01:23.719 LINK fused_ordering 01:01:23.719 CXX test/cpp_headers/notify.o 01:01:23.719 LINK nvme_compliance 01:01:23.719 LINK bdevperf 01:01:23.719 CC examples/nvme/nvme_manage/nvme_manage.o 01:01:23.719 CXX test/cpp_headers/nvme.o 01:01:23.719 CC examples/nvme/arbitration/arbitration.o 01:01:23.719 CC examples/nvme/hotplug/hotplug.o 01:01:23.719 LINK doorbell_aers 01:01:23.977 CXX test/cpp_headers/nvme_intel.o 01:01:23.977 LINK fdp 01:01:23.977 CXX test/cpp_headers/nvme_ocssd.o 01:01:23.977 CC examples/nvme/cmb_copy/cmb_copy.o 01:01:23.977 CXX test/cpp_headers/nvme_ocssd_spec.o 01:01:23.977 CXX test/cpp_headers/nvme_spec.o 01:01:23.977 LINK hotplug 01:01:23.977 CC test/rpc_client/rpc_client_test.o 01:01:24.235 LINK arbitration 01:01:24.235 LINK cmb_copy 01:01:24.235 CC test/nvme/cuse/cuse.o 01:01:24.235 CXX test/cpp_headers/nvme_zns.o 01:01:24.235 CC test/accel/dif/dif.o 01:01:24.235 LINK nvme_manage 01:01:24.235 LINK rpc_client_test 01:01:24.235 CC examples/nvme/abort/abort.o 01:01:24.235 CXX test/cpp_headers/nvmf_cmd.o 01:01:24.492 CC test/blobfs/mkfs/mkfs.o 01:01:24.492 CXX test/cpp_headers/nvmf_fc_spec.o 01:01:24.492 CXX test/cpp_headers/nvmf.o 01:01:24.492 CXX test/cpp_headers/nvmf_spec.o 01:01:24.492 CC examples/nvme/pmr_persistence/pmr_persistence.o 01:01:24.492 CC test/lvol/esnap/esnap.o 01:01:24.492 CXX test/cpp_headers/nvmf_transport.o 01:01:24.492 CXX test/cpp_headers/opal.o 01:01:24.492 LINK mkfs 01:01:24.492 CXX test/cpp_headers/opal_spec.o 01:01:24.492 LINK dif 01:01:24.492 CXX test/cpp_headers/pci_ids.o 01:01:24.750 LINK pmr_persistence 01:01:24.750 LINK abort 01:01:24.750 CXX test/cpp_headers/pipe.o 01:01:24.750 CXX test/cpp_headers/queue.o 01:01:24.750 CXX test/cpp_headers/reduce.o 01:01:24.750 CXX test/cpp_headers/rpc.o 01:01:24.750 CXX test/cpp_headers/scheduler.o 01:01:24.750 CXX test/cpp_headers/scsi.o 01:01:24.750 CXX test/cpp_headers/scsi_spec.o 01:01:25.007 CXX test/cpp_headers/sock.o 01:01:25.007 CXX test/cpp_headers/stdinc.o 01:01:25.007 CXX test/cpp_headers/string.o 01:01:25.007 CXX test/cpp_headers/thread.o 01:01:25.007 CXX test/cpp_headers/trace.o 01:01:25.007 CXX test/cpp_headers/trace_parser.o 01:01:25.007 CC test/bdev/bdevio/bdevio.o 01:01:25.007 CC examples/nvmf/nvmf/nvmf.o 01:01:25.007 CXX test/cpp_headers/tree.o 01:01:25.007 CXX test/cpp_headers/ublk.o 01:01:25.007 CXX test/cpp_headers/util.o 01:01:25.007 CXX 
test/cpp_headers/uuid.o 01:01:25.007 CXX test/cpp_headers/version.o 01:01:25.007 CXX test/cpp_headers/vfio_user_pci.o 01:01:25.265 CXX test/cpp_headers/vfio_user_spec.o 01:01:25.265 CXX test/cpp_headers/vhost.o 01:01:25.265 CXX test/cpp_headers/vmd.o 01:01:25.265 CXX test/cpp_headers/xor.o 01:01:25.265 CXX test/cpp_headers/zipf.o 01:01:25.265 LINK nvmf 01:01:25.523 LINK bdevio 01:01:25.523 LINK cuse 01:01:28.895 LINK esnap 01:01:29.462 ************************************ 01:01:29.462 END TEST make 01:01:29.462 ************************************ 01:01:29.462 01:01:29.462 real 0m52.008s 01:01:29.462 user 4m7.766s 01:01:29.462 sys 1m11.096s 01:01:29.462 10:58:34 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 01:01:29.462 10:58:34 make -- common/autotest_common.sh@10 -- $ set +x 01:01:29.462 10:58:34 -- common/autotest_common.sh@1142 -- $ return 0 01:01:29.462 10:58:34 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 01:01:29.462 10:58:34 -- pm/common@29 -- $ signal_monitor_resources TERM 01:01:29.462 10:58:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 01:01:29.462 10:58:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:01:29.462 10:58:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 01:01:29.462 10:58:34 -- pm/common@44 -- $ pid=5887 01:01:29.462 10:58:34 -- pm/common@50 -- $ kill -TERM 5887 01:01:29.462 10:58:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:01:29.462 10:58:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 01:01:29.462 10:58:34 -- pm/common@44 -- $ pid=5889 01:01:29.462 10:58:34 -- pm/common@50 -- $ kill -TERM 5889 01:01:29.462 10:58:34 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:01:29.462 10:58:34 -- nvmf/common.sh@7 -- # uname -s 01:01:29.462 10:58:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:29.462 10:58:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:29.462 10:58:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:29.462 10:58:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:29.462 10:58:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:29.462 10:58:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:29.462 10:58:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:29.462 10:58:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:29.462 10:58:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:29.462 10:58:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:29.462 10:58:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:01:29.462 10:58:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:01:29.462 10:58:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:29.462 10:58:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:29.462 10:58:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:01:29.462 10:58:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:29.462 10:58:34 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:29.462 10:58:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:29.462 10:58:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:29.462 10:58:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:29.462 10:58:34 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:29.462 10:58:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:29.462 10:58:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:29.462 10:58:34 -- paths/export.sh@5 -- # export PATH 01:01:29.462 10:58:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:29.462 10:58:34 -- nvmf/common.sh@47 -- # : 0 01:01:29.462 10:58:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:01:29.462 10:58:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:01:29.462 10:58:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:29.462 10:58:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:29.462 10:58:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:01:29.462 10:58:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:01:29.462 10:58:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:01:29.462 10:58:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 01:01:29.462 10:58:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 01:01:29.462 10:58:34 -- spdk/autotest.sh@32 -- # uname -s 01:01:29.462 10:58:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 01:01:29.463 10:58:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 01:01:29.463 10:58:34 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 01:01:29.463 10:58:34 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 01:01:29.463 10:58:34 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 01:01:29.463 10:58:34 -- spdk/autotest.sh@44 -- # modprobe nbd 01:01:29.722 10:58:34 -- spdk/autotest.sh@46 -- # type -P udevadm 01:01:29.722 10:58:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 01:01:29.722 10:58:34 -- spdk/autotest.sh@48 -- # udevadm_pid=65241 01:01:29.722 10:58:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 01:01:29.722 10:58:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 01:01:29.722 10:58:34 -- pm/common@17 -- # local monitor 01:01:29.722 10:58:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 01:01:29.722 10:58:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 01:01:29.722 10:58:34 -- pm/common@25 -- # sleep 1 01:01:29.722 10:58:34 -- pm/common@21 -- # date +%s 01:01:29.722 10:58:34 -- pm/common@21 -- # date +%s 01:01:29.722 10:58:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p 
monitor.autotest.sh.1721645914 01:01:29.722 10:58:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721645914 01:01:29.722 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721645914_collect-vmstat.pm.log 01:01:29.722 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721645914_collect-cpu-load.pm.log 01:01:30.658 10:58:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 01:01:30.658 10:58:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 01:01:30.658 10:58:35 -- common/autotest_common.sh@722 -- # xtrace_disable 01:01:30.658 10:58:35 -- common/autotest_common.sh@10 -- # set +x 01:01:30.658 10:58:35 -- spdk/autotest.sh@59 -- # create_test_list 01:01:30.658 10:58:35 -- common/autotest_common.sh@746 -- # xtrace_disable 01:01:30.658 10:58:35 -- common/autotest_common.sh@10 -- # set +x 01:01:30.658 10:58:35 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 01:01:30.658 10:58:35 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 01:01:30.658 10:58:35 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 01:01:30.658 10:58:35 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 01:01:30.658 10:58:35 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 01:01:30.658 10:58:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 01:01:30.658 10:58:35 -- common/autotest_common.sh@1455 -- # uname 01:01:30.658 10:58:35 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 01:01:30.658 10:58:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 01:01:30.659 10:58:35 -- common/autotest_common.sh@1475 -- # uname 01:01:30.659 10:58:35 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 01:01:30.659 10:58:35 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 01:01:30.659 10:58:35 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 01:01:30.659 10:58:35 -- spdk/autotest.sh@72 -- # hash lcov 01:01:30.659 10:58:35 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 01:01:30.659 10:58:35 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 01:01:30.659 --rc lcov_branch_coverage=1 01:01:30.659 --rc lcov_function_coverage=1 01:01:30.659 --rc genhtml_branch_coverage=1 01:01:30.659 --rc genhtml_function_coverage=1 01:01:30.659 --rc genhtml_legend=1 01:01:30.659 --rc geninfo_all_blocks=1 01:01:30.659 ' 01:01:30.659 10:58:35 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 01:01:30.659 --rc lcov_branch_coverage=1 01:01:30.659 --rc lcov_function_coverage=1 01:01:30.659 --rc genhtml_branch_coverage=1 01:01:30.659 --rc genhtml_function_coverage=1 01:01:30.659 --rc genhtml_legend=1 01:01:30.659 --rc geninfo_all_blocks=1 01:01:30.659 ' 01:01:30.659 10:58:35 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 01:01:30.659 --rc lcov_branch_coverage=1 01:01:30.659 --rc lcov_function_coverage=1 01:01:30.659 --rc genhtml_branch_coverage=1 01:01:30.659 --rc genhtml_function_coverage=1 01:01:30.659 --rc genhtml_legend=1 01:01:30.659 --rc geninfo_all_blocks=1 01:01:30.659 --no-external' 01:01:30.659 10:58:35 -- spdk/autotest.sh@81 -- # LCOV='lcov 01:01:30.659 --rc lcov_branch_coverage=1 01:01:30.659 --rc lcov_function_coverage=1 01:01:30.659 --rc genhtml_branch_coverage=1 01:01:30.659 --rc genhtml_function_coverage=1 01:01:30.659 --rc genhtml_legend=1 01:01:30.659 --rc geninfo_all_blocks=1 01:01:30.659 --no-external' 01:01:30.659 10:58:35 -- 
spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 01:01:30.917 lcov: LCOV version 1.14 01:01:30.917 10:58:35 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 01:01:45.792 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 01:01:45.792 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 01:01:58.016 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 01:01:58.016 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 01:01:58.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 01:01:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 01:01:58.017 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 01:01:58.017 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 01:01:58.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 01:01:58.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 01:02:00.544 10:59:05 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 01:02:00.544 10:59:05 -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:00.544 10:59:05 -- common/autotest_common.sh@10 -- # set +x 01:02:00.544 10:59:05 -- spdk/autotest.sh@91 -- # rm -f 01:02:00.544 10:59:05 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:02:01.109 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:02:01.368 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:02:01.368 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:02:01.368 10:59:06 -- spdk/autotest.sh@96 -- # get_zoned_devs 01:02:01.368 10:59:06 -- common/autotest_common.sh@1669 -- # zoned_devs=() 01:02:01.368 10:59:06 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 01:02:01.368 10:59:06 -- common/autotest_common.sh@1670 -- # local nvme bdf 01:02:01.368 10:59:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 01:02:01.368 10:59:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 01:02:01.368 10:59:06 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 01:02:01.368 10:59:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:02:01.368 10:59:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:02:01.368 10:59:06 -- common/autotest_common.sh@1672 -- # for 
nvme in /sys/block/nvme* 01:02:01.368 10:59:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 01:02:01.368 10:59:06 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 01:02:01.368 10:59:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:02:01.368 10:59:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:02:01.368 10:59:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 01:02:01.368 10:59:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 01:02:01.368 10:59:06 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 01:02:01.368 10:59:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 01:02:01.368 10:59:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:02:01.368 10:59:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 01:02:01.368 10:59:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 01:02:01.368 10:59:06 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 01:02:01.368 10:59:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 01:02:01.368 10:59:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:02:01.368 10:59:06 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 01:02:01.368 10:59:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 01:02:01.368 10:59:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 01:02:01.368 10:59:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 01:02:01.368 10:59:06 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 01:02:01.368 10:59:06 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 01:02:01.368 No valid GPT data, bailing 01:02:01.368 10:59:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:02:01.368 10:59:06 -- scripts/common.sh@391 -- # pt= 01:02:01.368 10:59:06 -- scripts/common.sh@392 -- # return 1 01:02:01.368 10:59:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 01:02:01.368 1+0 records in 01:02:01.368 1+0 records out 01:02:01.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596231 s, 176 MB/s 01:02:01.368 10:59:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 01:02:01.368 10:59:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 01:02:01.368 10:59:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 01:02:01.368 10:59:06 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 01:02:01.368 10:59:06 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 01:02:01.368 No valid GPT data, bailing 01:02:01.368 10:59:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:02:01.368 10:59:06 -- scripts/common.sh@391 -- # pt= 01:02:01.368 10:59:06 -- scripts/common.sh@392 -- # return 1 01:02:01.368 10:59:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 01:02:01.368 1+0 records in 01:02:01.368 1+0 records out 01:02:01.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00401199 s, 261 MB/s 01:02:01.368 10:59:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 01:02:01.368 10:59:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 01:02:01.368 10:59:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 01:02:01.368 10:59:06 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 01:02:01.368 10:59:06 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 01:02:01.626 No valid GPT data, bailing 
01:02:01.626 10:59:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 01:02:01.626 10:59:06 -- scripts/common.sh@391 -- # pt= 01:02:01.626 10:59:06 -- scripts/common.sh@392 -- # return 1 01:02:01.626 10:59:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 01:02:01.626 1+0 records in 01:02:01.626 1+0 records out 01:02:01.626 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595549 s, 176 MB/s 01:02:01.626 10:59:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 01:02:01.626 10:59:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 01:02:01.626 10:59:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 01:02:01.626 10:59:06 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 01:02:01.626 10:59:06 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 01:02:01.626 No valid GPT data, bailing 01:02:01.626 10:59:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 01:02:01.626 10:59:06 -- scripts/common.sh@391 -- # pt= 01:02:01.626 10:59:06 -- scripts/common.sh@392 -- # return 1 01:02:01.626 10:59:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 01:02:01.626 1+0 records in 01:02:01.626 1+0 records out 01:02:01.626 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422269 s, 248 MB/s 01:02:01.626 10:59:06 -- spdk/autotest.sh@118 -- # sync 01:02:01.626 10:59:06 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 01:02:01.626 10:59:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 01:02:01.626 10:59:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 01:02:04.153 10:59:09 -- spdk/autotest.sh@124 -- # uname -s 01:02:04.411 10:59:09 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 01:02:04.412 10:59:09 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 01:02:04.412 10:59:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:04.412 10:59:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:04.412 10:59:09 -- common/autotest_common.sh@10 -- # set +x 01:02:04.412 ************************************ 01:02:04.412 START TEST setup.sh 01:02:04.412 ************************************ 01:02:04.412 10:59:09 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 01:02:04.412 * Looking for test storage... 01:02:04.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 01:02:04.412 10:59:09 setup.sh -- setup/test-setup.sh@10 -- # uname -s 01:02:04.412 10:59:09 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 01:02:04.412 10:59:09 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 01:02:04.412 10:59:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:04.412 10:59:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:04.412 10:59:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 01:02:04.412 ************************************ 01:02:04.412 START TEST acl 01:02:04.412 ************************************ 01:02:04.412 10:59:09 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 01:02:04.669 * Looking for test storage... 
01:02:04.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 01:02:04.669 10:59:09 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 01:02:04.669 10:59:09 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 01:02:04.669 10:59:09 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 01:02:04.669 10:59:09 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 01:02:04.669 10:59:09 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 01:02:04.669 10:59:09 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 01:02:04.670 10:59:09 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:02:04.670 10:59:09 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 01:02:04.670 10:59:09 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 01:02:04.670 10:59:09 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 01:02:04.670 10:59:09 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 01:02:04.670 10:59:09 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 01:02:04.670 10:59:09 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 01:02:04.670 10:59:09 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:02:05.638 10:59:10 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 01:02:05.638 10:59:10 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 01:02:05.638 10:59:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 01:02:05.638 10:59:10 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 01:02:05.638 10:59:10 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 01:02:05.638 10:59:10 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 01:02:06.572 10:59:11 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 01:02:06.572 Hugepages 01:02:06.572 node hugesize free / total 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 01:02:06.572 01:02:06.572 Type BDF Vendor Device NUMA Driver Device Block devices 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 01:02:06.572 10:59:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 01:02:06.830 10:59:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 01:02:06.830 10:59:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 01:02:06.830 10:59:11 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 01:02:06.830 10:59:11 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 01:02:06.830 10:59:11 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 01:02:06.830 10:59:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 01:02:06.830 10:59:11 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 01:02:06.830 10:59:11 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 01:02:06.830 10:59:11 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:06.830 10:59:11 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:06.830 10:59:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 01:02:06.830 ************************************ 01:02:06.830 START TEST denied 01:02:06.830 ************************************ 01:02:06.830 10:59:11 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 01:02:06.830 10:59:11 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 01:02:06.830 10:59:11 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 01:02:06.830 10:59:11 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 01:02:06.830 10:59:11 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 01:02:06.830 10:59:11 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 01:02:07.811 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 01:02:07.812 10:59:12 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 01:02:07.812 10:59:12 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 01:02:07.812 10:59:12 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 01:02:07.812 10:59:12 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 01:02:07.812 10:59:12 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 01:02:07.812 10:59:12 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 01:02:07.812 10:59:12 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 01:02:07.812 10:59:12 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 01:02:07.812 10:59:12 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 01:02:07.812 10:59:12 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:02:08.747 01:02:08.747 real 0m1.903s 01:02:08.747 user 0m0.676s 01:02:08.747 sys 0m1.192s 01:02:08.747 10:59:13 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:08.747 10:59:13 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 01:02:08.747 ************************************ 01:02:08.747 END TEST denied 01:02:08.747 ************************************ 01:02:08.747 10:59:13 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 01:02:08.747 10:59:13 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 01:02:08.747 10:59:13 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:08.747 10:59:13 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:08.747 10:59:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 01:02:08.747 ************************************ 01:02:08.747 START TEST allowed 01:02:08.747 ************************************ 01:02:08.747 10:59:13 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 01:02:08.747 10:59:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 01:02:08.747 10:59:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 01:02:08.747 10:59:13 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 01:02:08.747 10:59:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 01:02:08.747 10:59:13 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 01:02:09.680 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:02:09.680 10:59:14 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 01:02:09.680 10:59:14 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 01:02:09.680 10:59:14 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 01:02:09.680 10:59:14 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 01:02:09.680 10:59:14 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 01:02:09.680 10:59:14 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 01:02:09.680 10:59:14 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 01:02:09.680 10:59:14 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 01:02:09.680 10:59:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 01:02:09.680 10:59:14 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:02:10.613 01:02:10.613 real 0m1.967s 01:02:10.613 user 0m0.761s 01:02:10.613 sys 0m1.227s 01:02:10.613 10:59:15 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 01:02:10.613 ************************************ 01:02:10.613 END TEST allowed 01:02:10.613 ************************************ 01:02:10.613 10:59:15 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 01:02:10.613 10:59:15 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 01:02:10.613 01:02:10.613 real 0m6.280s 01:02:10.613 user 0m2.435s 01:02:10.613 sys 0m3.864s 01:02:10.613 10:59:15 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:10.613 10:59:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 01:02:10.613 ************************************ 01:02:10.613 END TEST acl 01:02:10.613 ************************************ 01:02:10.872 10:59:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0 01:02:10.872 10:59:15 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 01:02:10.872 10:59:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:10.872 10:59:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:10.872 10:59:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 01:02:10.872 ************************************ 01:02:10.872 START TEST hugepages 01:02:10.872 ************************************ 01:02:10.872 10:59:15 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 01:02:10.872 * Looking for test storage... 01:02:10.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 4545800 kB' 'MemAvailable: 7369824 kB' 'Buffers: 2436 kB' 'Cached: 3027716 kB' 'SwapCached: 0 kB' 'Active: 439640 kB' 'Inactive: 2698776 kB' 'Active(anon): 118756 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 110036 kB' 'Mapped: 48832 kB' 'Shmem: 10492 kB' 'KReclaimable: 82604 kB' 'Slab: 163716 kB' 'SReclaimable: 82604 
kB' 'SUnreclaim: 81112 kB' 'KernelStack: 6860 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 341788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55492 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.872 10:59:16 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 01:02:10.872 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.873 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.874 10:59:16 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 01:02:10.874 10:59:16 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 01:02:10.874 10:59:16 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 01:02:10.874 10:59:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:10.874 10:59:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:10.874 10:59:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 01:02:11.133 ************************************ 01:02:11.133 START TEST default_setup 01:02:11.133 ************************************ 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 01:02:11.133 10:59:16 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:02:11.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:02:11.957 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:02:11.957 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6646000 kB' 'MemAvailable: 9469868 kB' 'Buffers: 2436 kB' 'Cached: 3027704 kB' 'SwapCached: 0 kB' 'Active: 453372 kB' 'Inactive: 2698784 kB' 'Active(anon): 132488 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 123556 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 82276 kB' 'Slab: 163296 kB' 'SReclaimable: 82276 kB' 'SUnreclaim: 81020 kB' 'KernelStack: 6752 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55524 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.957 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.958 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.959 10:59:17 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 01:02:11.959 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6646000 kB' 'MemAvailable: 9469868 kB' 'Buffers: 2436 kB' 'Cached: 3027704 kB' 'SwapCached: 0 kB' 'Active: 453132 kB' 'Inactive: 2698784 kB' 'Active(anon): 132248 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 123316 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 82276 kB' 'Slab: 163292 kB' 'SReclaimable: 82276 kB' 'SUnreclaim: 81016 kB' 'KernelStack: 6768 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55508 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.220 10:59:17 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.220 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.221 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6645748 kB' 'MemAvailable: 9469616 kB' 'Buffers: 2436 kB' 'Cached: 3027704 kB' 'SwapCached: 0 kB' 'Active: 453184 kB' 'Inactive: 2698784 kB' 'Active(anon): 132300 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 123368 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 82276 kB' 'Slab: 163292 kB' 'SReclaimable: 82276 kB' 
'SUnreclaim: 81016 kB' 'KernelStack: 6752 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55508 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.222 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.223 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 01:02:12.224 nr_hugepages=1024 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 01:02:12.224 resv_hugepages=0 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 01:02:12.224 surplus_hugepages=0 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 01:02:12.224 anon_hugepages=0 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6645748 kB' 'MemAvailable: 9469616 kB' 'Buffers: 2436 kB' 'Cached: 3027704 kB' 'SwapCached: 0 kB' 'Active: 452792 kB' 'Inactive: 2698784 kB' 'Active(anon): 131908 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 123012 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 82276 kB' 'Slab: 163292 kB' 'SReclaimable: 82276 kB' 'SUnreclaim: 81016 kB' 'KernelStack: 6768 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55508 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.224 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
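The repeated "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]" / "continue" entries above are the xtrace of a single lookup: setup/common.sh splits each meminfo line on ': ' with read -r var val _ and skips every key until the requested one matches, then echoes its value. A minimal standalone sketch of that approach, using a hypothetical helper name get_meminfo_field for illustration (this is not SPDK's setup/common.sh itself):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern that strips the per-node prefix

    # Print the value of one meminfo field, e.g. HugePages_Total or HugePages_Surp.
    # Usage: get_meminfo_field <field> [numa_node]
    get_meminfo_field() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
        line=${line#Node +([0-9]) }           # per-node files prefix each line with "Node N "
        IFS=': ' read -r var val _ <<<"$line" # split "Key:   value kB" into key and value
        if [[ $var == "$get" ]]; then         # every non-matching key is skipped, as in the trace
          echo "${val:-0}"
          return 0
        fi
      done <"$mem_f"
      return 1
    }

    get_meminfo_field HugePages_Total     # -> 1024 in the run above
    get_meminfo_field HugePages_Surp 0    # surplus pages on NUMA node 0

The linear scan is cheap because /proc/meminfo is only a few dozen lines; the xtrace merely makes every skipped key visible.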
01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
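Earlier in the trace the script echoed nr_hugepages=1024, resv_hugepages=0 and surplus_hugepages=0 and then evaluated (( 1024 == nr_hugepages + surp + resv )); the HugePages_Total lookup in progress here feeds the same arithmetic. A hedged, self-contained sketch of that consistency check (the variable names are illustrative, not the script's own):

    #!/usr/bin/env bash
    # Mirror the checks seen in the trace: the configured pool (vm.nr_hugepages)
    # must account for HugePages_Total together with surplus and reserved pages
    # (all zero in this run).
    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

    if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting is consistent"
    else
      echo "unexpected hugepage counts: total=$total" >&2
      exit 1
    fi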
01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 01:02:12.225 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@32 -- # no_nodes=1 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6645496 kB' 'MemUsed: 5596480 kB' 'SwapCached: 0 kB' 'Active: 453020 kB' 'Inactive: 2698784 kB' 'Active(anon): 132136 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 3030140 kB' 'Mapped: 48756 kB' 'AnonPages: 123240 kB' 'Shmem: 10468 kB' 'KernelStack: 6768 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82276 kB' 'Slab: 163292 kB' 'SReclaimable: 82276 kB' 'SUnreclaim: 81016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
01:02:12.226 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached ... HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] (no meminfo field from SwapCached through HugePages_Free matches HugePages_Surp; each iteration falls through to continue, IFS=': ', read -r var val _)
01:02:12.227 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
01:02:12.227 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
01:02:12.227 10:59:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
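The scan that just returned above is setup/common.sh's get_meminfo helper walking every "field: value" pair of the chosen meminfo file until it reaches the requested counter. A minimal sketch of that idea, written for illustration only (it follows the traced variable names but is not the SPDK source, and the echo-0 fallback is an assumption), looks like this:

#!/usr/bin/env bash
# Sketch only: approximates the get_meminfo scan seen in the xtrace above.
# It prints the value of one /proc/meminfo (or per-node meminfo) field,
# falling back to 0 when the field is absent (assumption, not from the trace).
shopt -s extglob   # needed for the "Node <N> " prefix strip below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live in /sys/devices/system/node/node<N>/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var != "$get" ]] && continue   # skip non-matching fields, as traced
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    echo 0
}

get_meminfo_sketch HugePages_Surp      # prints 0 when no surplus pages exist
get_meminfo_sketch HugePages_Free 0    # per-node lookup, if node0 meminfo exists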
01:02:12.227 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 01:02:12.227 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 01:02:12.227 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 01:02:12.227 node0=1024 expecting 1024 01:02:12.227 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 01:02:12.227 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 01:02:12.227 10:59:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 01:02:12.227 01:02:12.227 real 0m1.185s 01:02:12.227 user 0m0.519s 01:02:12.227 sys 0m0.625s 01:02:12.227 10:59:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:12.227 10:59:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 01:02:12.227 ************************************ 01:02:12.227 END TEST default_setup 01:02:12.227 ************************************ 01:02:12.227 10:59:17 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 01:02:12.227 10:59:17 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 01:02:12.227 10:59:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:12.227 10:59:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:12.227 10:59:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 01:02:12.227 ************************************ 01:02:12.227 START TEST per_node_1G_alloc 01:02:12.227 ************************************ 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 01:02:12.227 10:59:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 01:02:12.227 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:02:12.797 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:02:12.797 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:02:12.797 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7699632 kB' 'MemAvailable: 10523500 kB' 'Buffers: 2436 kB' 'Cached: 3027704 kB' 'SwapCached: 0 kB' 'Active: 453432 kB' 'Inactive: 2698788 kB' 'Active(anon): 132548 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698788 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 123680 kB' 'Mapped: 48916 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163380 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81108 kB' 'KernelStack: 6808 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55492 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:12.797 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.798 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.798 10:59:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached ... Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] (no /proc/meminfo field from SwapCached through Percpu matches AnonHugePages; each iteration falls through to continue, IFS=': ', read -r var val _; timestamps 01:02:12.798 to 01:02:12.799)
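The per_node_1G_alloc test traced above asks for 1048576 kB of hugepages on node 0; with the 2048 kB default hugepage size that works out to 512 pages, which the test then hands to scripts/setup.sh as NRHUGE=512 HUGENODE=0. The sketch below reproduces only that arithmetic; the helper name split_hugepages and its output format are hypothetical, not taken from hugepages.sh.

#!/usr/bin/env bash
# Illustrative sketch of the request seen in the trace: 1048576 kB of
# hugepages pinned to NUMA node 0. Names here are hypothetical; only the
# arithmetic mirrors the traced values (512 pages of 2048 kB each).
split_hugepages() {
    local size_kb=$1; shift
    local default_hugepage_kb=2048          # Hugepagesize from /proc/meminfo
    local nr_hugepages=$(( size_kb / default_hugepage_kb ))
    local -a nodes=("$@")                   # e.g. node_ids=('0') in the trace
    local node
    for node in "${nodes[@]}"; do
        # The traced single-node run assigns the full count to node 0
        # (nodes_test[0]=512); the real helper may divide multi-node requests.
        echo "node${node}=${nr_hugepages}"
    done
    echo "NRHUGE=${nr_hugepages} HUGENODE=${nodes[0]}"
}

split_hugepages 1048576 0
# node0=512
# NRHUGE=512 HUGENODE=0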
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7699884 kB' 'MemAvailable: 10523752 kB' 'Buffers: 2436 kB' 'Cached: 3027704 kB' 'SwapCached: 0 kB' 'Active: 453276 kB' 'Inactive: 2698788 kB' 'Active(anon): 132392 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698788 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 123496 kB' 'Mapped: 48856 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163380 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81108 kB' 'KernelStack: 6792 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55476 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
01:02:12.799 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal ... HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] (no /proc/meminfo field from MemTotal through HugePages_Total matches HugePages_Surp; each iteration falls through to continue, IFS=': ', read -r var val _; timestamps 01:02:12.799 to 01:02:12.801)
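Around this point the test reads the global HugePages_Surp and HugePages_Rsvd counters (both 0 in the snapshots above) before comparing the per-node totals against the 512 pages it requested. The following is a rough, hypothetical sketch of that kind of bookkeeping against /proc/meminfo; the pass/fail rule shown is illustrative and is not the verify_nr_hugepages implementation.

#!/usr/bin/env bash
# Hypothetical sketch: compare the kernel's hugepage counters against an
# expected allocation (512 pages in the traced run). Field handling mirrors
# /proc/meminfo; the pass/fail logic is illustrative, not the SPDK check.
expected=${1:-512}

read_counter() {
    # awk prints the numeric value of one "Name: value" line from /proc/meminfo
    awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo
}

total=$(read_counter HugePages_Total)
free=$(read_counter HugePages_Free)
rsvd=$(read_counter HugePages_Rsvd)
surp=$(read_counter HugePages_Surp)

echo "HugePages_Total=$total Free=$free Rsvd=$rsvd Surp=$surp"

# Ignore surplus pages (allocated on demand beyond nr_hugepages) when
# comparing against what the test explicitly requested.
if (( total - surp == expected )); then
    echo "hugepages OK: $((total - surp)) == $expected"
else
    echo "hugepages mismatch: $((total - surp)) != $expected" >&2
    exit 1
fi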
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7699884 kB' 'MemAvailable: 10523752 kB' 'Buffers: 2436 kB' 'Cached: 3027704 kB' 'SwapCached: 0 kB' 'Active: 453068 kB' 'Inactive: 2698788 kB' 'Active(anon): 132184 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698788 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123344 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163384 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81112 kB' 'KernelStack: 6784 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55476 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
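The long runs of IFS=': ' / read -r var val _ / continue records above all come from one small parser in setup/common.sh: it loads a meminfo file into an array and walks it field by field until the requested key matches. A minimal sketch of that idea follows; get_meminfo_field is an illustrative name and reading /proc/meminfo directly is an assumption here (the project's real get_meminfo supports per-node files and other sources, but the core loop has the same shape).

# Minimal sketch of the scan driving the records above (illustrative helper,
# not the project's function).
get_meminfo_field() {
    local get=$1 mem_f=/proc/meminfo
    local -a mem
    local line var val _
    mapfile -t mem < "$mem_f"                 # one array element per meminfo line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue      # every mismatch logs a 'continue'
        echo "$val"                           # matched: print the value, return 0
        return 0
    done
    return 1
}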
01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.801 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
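Two bash details in these records are easy to misread. The backslashed \H\u\g\e\P\a\g\e\s\_\R\s\v\d is just how xtrace prints a quoted right-hand side of [[ == ]] (quoting forces a literal comparison instead of a glob match), and the mem=("${mem[@]#Node +([0-9]) }") step at common.sh@29 strips the "Node N " prefix that per-node meminfo files carry, which needs extglob. A self-contained illustration, with sample data made up for the example:

#!/usr/bin/env bash
# Illustration of the prefix strip at common.sh@29 and the literal match.
shopt -s extglob                           # +([0-9]) is an extglob pattern
mem=('Node 0 HugePages_Total:   512' 'Node 0 HugePages_Free:    512')
mem=("${mem[@]#Node +([0-9]) }")           # -> 'HugePages_Total:   512' ...
printf '%s\n' "${mem[@]}"
# A quoted RHS compares literally; xtrace renders it with per-character escapes.
[[ HugePages_Rsvd == "HugePages_Rsvd" ]] && echo 'literal match'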
01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 
10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.802 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 01:02:12.803 nr_hugepages=512 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 01:02:12.803 resv_hugepages=0 01:02:12.803 surplus_hugepages=0 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 01:02:12.803 anon_hugepages=0 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7699884 kB' 'MemAvailable: 10523752 kB' 'Buffers: 2436 kB' 'Cached: 3027704 kB' 'SwapCached: 0 kB' 'Active: 452996 kB' 'Inactive: 2698788 kB' 'Active(anon): 132112 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698788 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123228 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163384 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81112 kB' 'KernelStack: 6768 kB' 
'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55476 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
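Buried in the block above is the bookkeeping the test actually asserts: nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0, followed by the guards (( 512 == nr_hugepages + surp + resv )) and (( 512 == nr_hugepages )) before HugePages_Total is re-read for the same comparison. A hedged stand-alone version of that check is below; which meminfo field the script binds to nr_hugepages is not visible in this excerpt, so HugePages_Free is assumed.

# Stand-alone re-creation of the guards seen at hugepages.sh@107-@110.
expected=512
nr_hugepages=$(awk '/^HugePages_Free:/  {print $2}' /proc/meminfo)   # assumption
surp=$(awk         '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk         '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
total=$(awk        '/^HugePages_Total:/ {print $2}' /proc/meminfo)
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
(( expected == nr_hugepages + surp + resv )) || echo 'hugepage count mismatch' >&2
(( expected == total ))                      || echo 'HugePages_Total mismatch' >&2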
01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
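The echo 512 / return 0 pair at common.sh@33 is the parser's only output path: the matched value is printed so callers can capture it with command substitution, which is why every get_meminfo call in this trace ends that way. A short usage sketch, reusing the illustrative get_meminfo_field helper from the earlier sketch rather than the project's function:

total=$(get_meminfo_field HugePages_Total)   # -> 512 on this runner
surp=$(get_meminfo_field HugePages_Surp)     # -> 0
resv=$(get_meminfo_field HugePages_Rsvd)     # -> 0
printf 'total=%s surp=%s resv=%s\n' "$total" "$surp" "$resv"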
01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.803 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 
10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:12.804 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:12.804 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:12.804 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:12.804 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.064 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7699884 kB' 'MemUsed: 4542092 kB' 'SwapCached: 0 kB' 'Active: 452996 kB' 'Inactive: 2698788 kB' 'Active(anon): 132112 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698788 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 3030140 kB' 'Mapped: 48756 kB' 'AnonPages: 123228 kB' 'Shmem: 10468 kB' 'KernelStack: 6768 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 82272 kB' 'Slab: 163384 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.065 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 01:02:13.066 node0=512 expecting 512 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 01:02:13.066 01:02:13.066 real 0m0.714s 01:02:13.066 user 0m0.330s 01:02:13.066 sys 0m0.430s 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:13.066 10:59:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 01:02:13.066 ************************************ 01:02:13.066 END TEST per_node_1G_alloc 01:02:13.066 ************************************ 01:02:13.066 10:59:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 01:02:13.066 10:59:18 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 01:02:13.066 10:59:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:13.066 10:59:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:13.066 10:59:18 
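
The "node0=512 expecting 512" line above is produced by the per-node verification loop the xtrace walks through (setup/hugepages.sh@126-130): tally the tested and system-reported counts, print one line per node, and fail if they differ. Below is a minimal sketch of that comparison pattern only; the nodes_test/nodes_sys values are hard-coded stand-ins, whereas the real script fills them from the test parameters and from the system before this loop runs.

#!/usr/bin/env bash
# Sketch of the per-node check traced above: record the tested and the
# system-reported hugepage counts, print "nodeN=<sys> expecting <test>",
# and fail if they differ. The two arrays are assumed stand-in values.
nodes_test=(512)          # pages the test asked for on each node (assumed)
nodes_sys=(512)           # pages the kernel actually reports per node (assumed)
declare -A sorted_t sorted_s

for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1
    sorted_s[${nodes_sys[node]}]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
done
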
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 01:02:13.066 ************************************ 01:02:13.066 START TEST even_2G_alloc 01:02:13.066 ************************************ 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 01:02:13.066 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:02:13.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:02:13.638 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:02:13.638 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc 
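
The even_2G_alloc preamble above reduces to a small piece of arithmetic: a 2097152 kB (2 GiB) request at the default 2048 kB hugepage size gives nr_hugepages=1024, which the HUGE_EVEN_ALLOC path then spreads over the available nodes. The sketch below only mirrors that arithmetic as implied by the traced values; the hugepage size and node count are hard-coded assumptions, and this is not the suite's actual helper.

#!/usr/bin/env bash
# Arithmetic implied by the traced values: 2 GiB request / 2 MiB pages
# = 1024 hugepages, spread evenly with the remainder on the last node.
size_kb=2097152          # requested pool size in kB (2 GiB)
hugepagesize_kb=2048     # Hugepagesize from /proc/meminfo, in kB (assumed)
no_nodes=1               # NUMA nodes used by the test (assumed)

(( size_kb >= hugepagesize_kb )) || { echo "request smaller than one hugepage" >&2; exit 1; }
nr_hugepages=$(( size_kb / hugepagesize_kb ))

declare -a nodes_test
per_node=$(( nr_hugepages / no_nodes ))
for (( node = 0; node < no_nodes; node++ )); do
    nodes_test[node]=$per_node
done
(( nodes_test[no_nodes - 1] += nr_hugepages % no_nodes ))

echo "nr_hugepages=$nr_hugepages nodes_test=(${nodes_test[*]})"
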
-- setup/hugepages.sh@92 -- # local surp 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6663272 kB' 'MemAvailable: 9487144 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 453172 kB' 'Inactive: 2698792 kB' 'Active(anon): 132288 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123396 kB' 'Mapped: 48884 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163304 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81032 kB' 'KernelStack: 6792 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55492 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- 
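
Almost everything from here to the end of the section is the xtrace of one small helper, get_meminfo, reading /proc/meminfo (or a node's meminfo file under /sys) and returning a single field; every key that does not match the requested one shows up as a "continue" line, which is why the log is dominated by them. The sketch below restates that pattern in condensed form, assuming the same field names; it is not the verbatim setup/common.sh code.

#!/usr/bin/env bash
# Condensed sketch of the get_meminfo pattern traced above: read the
# meminfo file (per-node when a node id is given), strip the "Node N"
# prefix those files carry, and print the value of the requested field.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local var val rest

    # per-node meminfo lives under /sys and prefixes each line with "Node N"
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS=': ' read -r var val rest; do
        if [[ $var == Node ]]; then
            # "Node <n> Key:  value kB" -> re-split everything after the node id
            IFS=': ' read -r var val rest <<<"$rest"
        fi
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done <"$mem_f"

    return 1
}

# Examples: system-wide surplus pages, and free pages on node 0
# (falls back to /proc/meminfo if that node has no meminfo file).
get_meminfo HugePages_Surp
get_meminfo HugePages_Free 0
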
setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 01:02:13.638 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6663524 kB' 'MemAvailable: 9487396 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 453016 kB' 'Inactive: 
2698792 kB' 'Active(anon): 132132 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123280 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163304 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81032 kB' 'KernelStack: 6784 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55492 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.639 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.640 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6663788 kB' 'MemAvailable: 9487660 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 453276 kB' 'Inactive: 2698792 kB' 'Active(anon): 132392 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123540 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163304 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81032 kB' 'KernelStack: 6784 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55492 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- 
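
The repeated get_meminfo calls in this stretch (AnonHugePages, HugePages_Surp, HugePages_Rsvd) all come out of /proc/meminfo. When reading a log like this, the per-node pool the tests manipulate can also be inspected directly through the kernel's hugetlb sysfs attributes; the snippet below is only a companion check for that purpose and is not part of the test suite.

#!/usr/bin/env bash
# Print the 2 MiB hugepage pool per NUMA node straight from sysfs, as a
# cross-check against the /proc/meminfo totals quoted in the trace.
for node in /sys/devices/system/node/node*; do
    hp=$node/hugepages/hugepages-2048kB
    [[ -d $hp ]] || continue   # no 2 MiB hugepage pool on this kernel/arch
    printf '%s: total=%s free=%s surplus=%s\n' \
        "${node##*/}" \
        "$(<"$hp"/nr_hugepages)" \
        "$(<"$hp"/free_hugepages)" \
        "$(<"$hp"/surplus_hugepages)"
done
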
setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.641 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.642 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 01:02:13.643 nr_hugepages=1024 01:02:13.643 resv_hugepages=0 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 01:02:13.643 surplus_hugepages=0 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 01:02:13.643 anon_hugepages=0 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6663980 kB' 'MemAvailable: 9487852 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 453148 kB' 'Inactive: 2698792 kB' 'Active(anon): 132264 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123412 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163300 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81028 kB' 'KernelStack: 6736 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55476 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:13.643 10:59:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.643 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.644 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 01:02:13.645 10:59:18 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6663980 kB' 'MemUsed: 5577996 kB' 'SwapCached: 0 kB' 'Active: 453100 kB' 'Inactive: 2698792 kB' 'Active(anon): 132216 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 3030144 kB' 'Mapped: 48756 kB' 'AnonPages: 123368 kB' 'Shmem: 10468 kB' 'KernelStack: 6784 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82272 kB' 'Slab: 163300 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.645 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 01:02:13.646 node0=1024 expecting 1024 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 01:02:13.646 01:02:13.646 real 0m0.713s 01:02:13.646 user 0m0.337s 01:02:13.646 sys 0m0.422s 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:13.646 10:59:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 01:02:13.646 ************************************ 01:02:13.646 END TEST even_2G_alloc 01:02:13.646 ************************************ 01:02:13.905 10:59:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 01:02:13.905 10:59:18 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 01:02:13.905 10:59:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:13.905 10:59:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:13.905 10:59:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 01:02:13.905 ************************************ 01:02:13.905 START TEST odd_alloc 01:02:13.905 ************************************ 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
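The even_2G_alloc test above finishes by confirming that node 0 holds exactly 1024 hugepages ('node0=1024 expecting 1024'), and run_test then starts odd_alloc, which deliberately asks for an odd page count: HUGEMEM=2049 makes the requested size 2098176 kB, get_test_nr_hugepages turns that into nr_hugepages=1025, and with a single NUMA node (_no_nodes=1) all 1025 pages land in nodes_test[0]. A minimal stand-alone sketch of that arithmetic, assuming the requested size is simply rounded up to whole 2048 kB pages (the exact rounding inside hugepages.sh is not shown in this excerpt):

  # Sketch only, not the SPDK helper: reproduce the numbers seen in the trace,
  # assuming HUGEMEM is given in MB and the hugepage size is 2048 kB.
  HUGEMEM=2049                                  # as exported above
  hugepagesize_kb=2048                          # 'Hugepagesize: 2048 kB'
  size_kb=$(( HUGEMEM * 1024 ))                 # 2098176 kB, matches size= in the trace
  nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))   # rounds up to 1025
  echo "nr_hugepages=$nr_hugepages hugetlb_kb=$(( nr_hugepages * hugepagesize_kb ))"

With 1025 pages of 2048 kB each the expected hugetlb footprint is 2099200 kB, which is exactly the 'Hugetlb: 2099200 kB' value reported by the /proc/meminfo snapshots that follow.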
01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 01:02:13.905 10:59:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:02:14.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:02:14.426 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:02:14.426 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6660956 kB' 'MemAvailable: 9484828 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 453348 kB' 'Inactive: 2698792 kB' 'Active(anon): 132464 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 123308 kB' 'Mapped: 48884 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163312 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81040 kB' 'KernelStack: 6800 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55508 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 
10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.426 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 
10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
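Each get_meminfo pass here (AnonHugePages above returning anon=0, and the HugePages_Surp lookup that begins next) follows the same pattern: read the meminfo file with IFS=': ', compare each field name against the requested key, and echo its value on a match. The traced code also probes for a per-node /sys/devices/system/node/node<N>/meminfo file and strips 'Node <N> ' prefixes when one is selected; with node= empty here it stays on /proc/meminfo. A minimal sketch of that lookup pattern for the system-wide case only; the helper name below is made up for illustration:

  # get_meminfo_value KEY - sketch of the /proc/meminfo lookup pattern traced above.
  # Values are in kB for most fields and plain page counts for HugePages_* fields.
  get_meminfo_value() {
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$key" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1                        # key not present
  }

  get_meminfo_value AnonHugePages     # prints 0 on the VM in this log
  get_meminfo_value HugePages_Surp    # also 0, hence surp=0 further below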
01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6660956 kB' 'MemAvailable: 9484828 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 453136 kB' 'Inactive: 2698792 kB' 'Active(anon): 132252 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 123392 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163312 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81040 kB' 'KernelStack: 6784 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55508 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.427 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.428 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.429 
10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6660956 kB' 'MemAvailable: 9484828 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 453120 kB' 'Inactive: 2698792 kB' 'Active(anon): 132236 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 123388 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163312 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81040 kB' 'KernelStack: 6784 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55492 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.429 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.430 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 01:02:14.431 nr_hugepages=1025 01:02:14.431 resv_hugepages=0 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 01:02:14.431 surplus_hugepages=0 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 01:02:14.431 anon_hugepages=0 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6660956 kB' 'MemAvailable: 9484828 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 452896 kB' 'Inactive: 2698792 kB' 'Active(anon): 132012 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 123128 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163312 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81040 kB' 'KernelStack: 6784 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55492 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.431 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 01:02:14.432 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6660956 kB' 'MemUsed: 5581020 kB' 'SwapCached: 0 kB' 'Active: 453112 kB' 'Inactive: 2698792 kB' 'Active(anon): 132228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 3030144 kB' 'Mapped: 49016 kB' 'AnonPages: 123372 kB' 'Shmem: 10468 kB' 'KernelStack: 6832 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82272 kB' 'Slab: 163312 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 81040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
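The long runs of IFS=': ', read -r var val _, [[ ... ]] and continue entries above are bash xtrace for the successive get_meminfo calls (HugePages_Rsvd, then HugePages_Total, now HugePages_Surp for node 0): the helper walks a meminfo snapshot one line at a time and skips every key that does not match the requested one, and the backslash-escaped name on the right-hand side of each [[ ... ]] test is that requested key as xtrace re-quotes it. A minimal sketch of that scan pattern, assuming a standalone function name and a plain string comparison rather than the exact setup/common.sh code:

    # Sketch only: mirrors the scan visible in the trace; the actual
    # setup/common.sh helper may structure this loop differently.
    get_meminfo_field() {
        local get=$1 var val _line
        local -a mem
        mapfile -t mem < /proc/meminfo                # one array element per "Key: value kB" line
        for _line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$_line"   # split on ':' and spaces
            [[ $var == "$get" ]] || continue          # every skipped key is one 'continue' entry in the log
            echo "$val"                               # e.g. 0 for HugePages_Rsvd, 1025 for HugePages_Total
            return 0
        done
        return 1
    }

Against the snapshot printed at the start of this scan, asking for HugePages_Surp would print 0, which matches the echo 0 that closes the loop a little further down.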
01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.433 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
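This particular scan started with node=0, so the file being parsed is /sys/devices/system/node/node0/meminfo rather than /proc/meminfo, and each of its lines carries a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips before the key/value loop runs; the earlier system-wide lookups used node='', whose .../node/node/meminfo probe never exists, so they stayed on /proc/meminfo. A short sketch of that source-selection step, under the same caveat that names are illustrative rather than copied from setup/common.sh:

    # Sketch of the per-node source selection seen in the trace above.
    shopt -s extglob                       # needed for the +([0-9]) pattern below
    dump_meminfo() {
        local node=$1                      # "" for the system-wide view, "0" for NUMA node 0
        local mem_f=/proc/meminfo
        local -a mem
        # With node="" this probes /sys/devices/system/node/node/meminfo, which
        # never exists, so the system-wide file is kept -- the fallback the log shows.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines read "Node 0 HugePages_Surp: 0"; drop the "Node 0 " prefix
        # so the same key/value scan works for both sources.
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }

With the prefix removed, the echo 0 just above is the node-0 HugePages_Surp value that feeds the nodes_test bookkeeping in the next few entries.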
01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 01:02:14.434 node0=1025 expecting 1025 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 01:02:14.434 01:02:14.434 real 0m0.700s 01:02:14.434 user 0m0.318s 01:02:14.434 sys 0m0.428s 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:14.434 ************************************ 01:02:14.434 END TEST odd_alloc 01:02:14.434 ************************************ 01:02:14.434 10:59:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 01:02:14.434 10:59:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 01:02:14.434 10:59:19 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 01:02:14.434 10:59:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:14.434 10:59:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:14.434 10:59:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 01:02:14.693 ************************************ 01:02:14.693 START TEST custom_alloc 01:02:14.693 ************************************ 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 01:02:14.693 10:59:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:02:14.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:02:14.952 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:02:14.952 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7708516 kB' 'MemAvailable: 10532388 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 453220 kB' 'Inactive: 2698792 kB' 'Active(anon): 132336 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 123440 kB' 'Mapped: 48872 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163148 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 80876 kB' 'KernelStack: 6792 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55524 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.216 10:59:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
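
For reference, the custom_alloc setup traced above requests 512 hugepages on a single NUMA node and passes that request to scripts/setup.sh through HUGENODE. The following is only a minimal sketch of that per-node request assembly, using the array names visible in the trace (nodes_hp, HUGENODE); the real setup/hugepages.sh helpers may differ in detail:

    declare -a nodes_hp HUGENODE
    nodes_hp[0]=512                                      # pages requested on node 0, as in the trace
    nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")  # becomes HUGENODE='nodes_hp[0]=512'
        (( nr_hugepages += nodes_hp[node] ))
    done
    # The test then invokes the setup script, roughly: HUGENODE="${HUGENODE[*]}" scripts/setup.sh output
    printf 'HUGENODE=%s nr_hugepages=%d\n' "${HUGENODE[*]}" "$nr_hugepages"

With one node this yields HUGENODE='nodes_hp[0]=512' and nr_hugepages=512, matching the nr_hugepages=512 reported by setup.sh in the trace.
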
01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.216 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.217 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241976 kB' 'MemFree: 7708560 kB' 'MemAvailable: 10532432 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 453040 kB' 'Inactive: 2698792 kB' 'Active(anon): 132156 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 123268 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163148 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 80876 kB' 'KernelStack: 6768 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55492 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.218 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
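
The field-by-field scans in this trace (AnonHugePages above, HugePages_Surp here, HugePages_Rsvd and HugePages_Total below) all follow the same pattern: read /proc/meminfo with IFS=': ', skip every non-matching key with continue, and echo the value of the requested key. A stand-alone sketch of that lookup is shown below with a hypothetical helper name, get_meminfo_value; the real helper is get_meminfo in setup/common.sh, which additionally supports per-node lookups via /sys/devices/system/node/*/meminfo (omitted here) and whose exact code may differ:

    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching fields, as in the trace
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
        echo 0                                 # key absent: treat as zero
    }
    # usage: surp=$(get_meminfo_value HugePages_Surp)   # -> 0 on the machine traced here
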
01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7709228 kB' 'MemAvailable: 10533100 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 453032 kB' 'Inactive: 2698792 kB' 'Active(anon): 132148 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 123248 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163148 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 80876 kB' 'KernelStack: 6768 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55492 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.219 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.220 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 01:02:15.221 nr_hugepages=512 01:02:15.221 resv_hugepages=0 
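[editor's note] The long run of "[[ <key> == H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" lines above is setup/common.sh walking every /proc/meminfo key until it reaches the one requested (HugePages_Rsvd here, giving resv=0). A minimal sketch of that parsing pattern, not the SPDK helper itself; the function name below is illustrative and it reads the file directly instead of going through mapfile as common.sh does:

  # Read "Key: value [kB]" pairs and print the value for the requested key,
  # skipping everything else -- the same [[ ... ]] / continue loop as the trace.
  get_meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  resv=$(get_meminfo_value HugePages_Rsvd)   # 0 in the run above
  echo "resv=$resv"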
01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 01:02:15.221 surplus_hugepages=0 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 01:02:15.221 anon_hugepages=0 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7709756 kB' 'MemAvailable: 10533628 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 453124 kB' 'Inactive: 2698792 kB' 'Active(anon): 132240 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 123356 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 82272 kB' 'Slab: 163148 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 80876 kB' 'KernelStack: 6784 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55492 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.221 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 
10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.222 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
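[editor's note] The second pass above re-reads /proc/meminfo for HugePages_Total (512), hugepages.sh@110 checks that total against the requested count plus surplus and reserved pages, and get_nodes then enumerates the NUMA nodes from sysfs. A condensed sketch of those two steps using this run's values; the sysfs nr_hugepages path used to fill nodes_sys is an assumption, since the trace only shows the already-expanded result (512 for node0):

  # Global accounting check from hugepages.sh@110 (numbers from this log).
  nr_hugepages=512 surp=0 resv=0 total=512
  (( total == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch"; exit 1; }

  # get_nodes-style enumeration; node[0-9]* stands in for the +([0-9]) extglob
  # seen in the trace, and the per-node count is read from the standard sysfs
  # file (assumed -- 2048 kB matches the Hugepagesize reported above).
  nodes_sys=()
  for node in /sys/devices/system/node/node[0-9]*; do
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"   # 1 in this run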
-- # mem_f=/proc/meminfo 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7710924 kB' 'MemUsed: 4531052 kB' 'SwapCached: 0 kB' 'Active: 453084 kB' 'Inactive: 2698792 kB' 'Active(anon): 132200 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 3030144 kB' 'Mapped: 48756 kB' 'AnonPages: 123320 kB' 'Shmem: 10468 kB' 'KernelStack: 6768 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82272 kB' 'Slab: 163148 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 80876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 
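[editor's note] For the per-node surplus query, get_meminfo is called with node=0 and, as common.sh@22-24 show just above, it switches its source from /proc/meminfo to the node's own meminfo file. A condensed sketch of that source selection, illustrative rather than the exact helper:

  # System-wide by default, per-node when a node id is given and the sysfs
  # file exists -- mirroring the [[ -e .../node0/meminfo ]] test in the trace.
  node=0                               # "" would mean system-wide
  mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  # Per-node lines carry a "Node <id> " prefix, which common.sh strips with
  # "${mem[@]#Node +([0-9]) }" before running the same key scan as before.
  grep HugePages_Surp "$mem_f"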
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.223 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 01:02:15.224 node0=512 expecting 512 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 01:02:15.224 01:02:15.224 real 0m0.733s 01:02:15.224 user 0m0.320s 01:02:15.224 sys 0m0.415s 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:15.224 10:59:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 01:02:15.224 ************************************ 01:02:15.224 END TEST custom_alloc 
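[editor's note] custom_alloc finishes by folding the per-node counts into sorted_t/sorted_s and asserting that what sysfs reports matches what the test requested, hence the "node0=512 expecting 512" line and the [[ 512 == \5\1\2 ]] check above. A toy version of that final comparison with this run's numbers (not the exact sorted_t/sorted_s mechanism):

  # nodes_test = what the test asked for, nodes_sys = what the system reports.
  nodes_test=([0]=512)
  nodes_sys=([0]=512)
  for node in "${!nodes_test[@]}"; do
      echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
      [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
  done
  echo "custom_alloc: per-node hugepage allocation verified"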
01:02:15.224 ************************************ 01:02:15.484 10:59:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 01:02:15.484 10:59:20 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 01:02:15.484 10:59:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:15.484 10:59:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:15.484 10:59:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 01:02:15.484 ************************************ 01:02:15.484 START TEST no_shrink_alloc 01:02:15.484 ************************************ 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 01:02:15.484 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:02:15.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:02:15.742 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:02:15.742 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 01:02:16.005 
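[editor's note] no_shrink_alloc requests 2097152 kB of hugepages pinned to node 0; with the 2048 kB hugepage size reported earlier in the log, that works out to the nr_hugepages=1024 set at hugepages.sh@57 before scripts/setup.sh is run. A quick sanity check of that arithmetic (kB units and the 2048 kB default are taken from this log's Hugepagesize, not hard-coded in the script necessarily):

  # Requested size (kB) divided by the hugepage size (kB) gives the page count.
  size_kb=2097152
  default_hugepages_kb=2048            # "Hugepagesize: 2048 kB" above
  (( size_kb >= default_hugepages_kb )) || { echo "request below one hugepage"; exit 1; }
  echo "nr_hugepages=$(( size_kb / default_hugepages_kb ))"   # -> 1024
  # The single user node id ('0') then receives the whole allocation,
  # matching nodes_test[_no_nodes]=1024 at hugepages.sh@71 in the trace.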
10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6668380 kB' 'MemAvailable: 9492248 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 448284 kB' 'Inactive: 2698792 kB' 'Active(anon): 127400 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 118456 kB' 'Mapped: 48108 kB' 'Shmem: 10468 kB' 'KReclaimable: 82264 kB' 'Slab: 162860 kB' 'SReclaimable: 82264 kB' 'SUnreclaim: 80596 kB' 'KernelStack: 6752 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55396 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
01:02:16.005 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the same setup/common.sh@31-32 IFS=': ' / read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue trace repeats for each remaining /proc/meminfo key (MemAvailable through HardwareCorrupted) until the requested key is reached ...]
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
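The run of IFS=': ' / read -r var val _ / compare / continue records above is setup/common.sh's get_meminfo scanning the captured /proc/meminfo snapshot field by field until it reaches the requested key (AnonHugePages here, value 0). A minimal, self-contained sketch of that lookup pattern follows; the helper name meminfo_value is illustrative and not part of the SPDK scripts.

# Sketch of the lookup pattern traced above (not the SPDK helper itself).
# Scans a meminfo-style file with IFS=': ' and prints the value of one key.
meminfo_value() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every non-matching key
        echo "$val"
        return 0
    done < "$mem_f"
    return 1   # requested key not present
}
# Example: meminfo_value AnonHugePages    -> 0 on this host
#          meminfo_value HugePages_Total  -> 1024 on this host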
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6669052 kB' 'MemAvailable: 9492920 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 448224 kB' 'Inactive: 2698792 kB' 'Active(anon): 127340 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 118652 kB' 'Mapped: 48016 kB' 'Shmem: 10468 kB' 'KReclaimable: 82264 kB' 'Slab: 162856 kB' 'SReclaimable: 82264 kB' 'SUnreclaim: 80592 kB' 'KernelStack: 6720 kB' 'PageTables: 3632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55380 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
01:02:16.006 10:59:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same setup/common.sh@31-32 IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace repeats for each /proc/meminfo key from MemTotal through HugePages_Rsvd until the requested key is reached ...]
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
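Each get_meminfo call in this trace also tests [[ -e /sys/devices/system/node/node/meminfo ]]; because no node id was passed, the per-node path does not exist and the helper falls back to the system-wide /proc/meminfo, while the mem=("${mem[@]#Node +([0-9]) }") step strips the "Node <N> " prefix that per-node meminfo lines carry. A rough sketch of that file-selection step, assuming the standard sysfs layout; select_meminfo_file is an illustrative name, not the SPDK function.

# Sketch only: pick the per-NUMA-node meminfo file when a node id is given and
# present, otherwise fall back to /proc/meminfo (what this run does, node="").
select_meminfo_file() {
    local node=$1 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}
# Example: select_meminfo_file ""  -> /proc/meminfo
#          select_meminfo_file 0   -> /sys/devices/system/node/node0/meminfo (if node0 exists)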
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
01:02:16.007 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6669052 kB' 'MemAvailable: 9492920 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 448408 kB' 'Inactive: 2698792 kB' 'Active(anon): 127524 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 118872 kB' 'Mapped: 48016 kB' 'Shmem: 10468 kB' 'KReclaimable: 82264 kB' 'Slab: 162848 kB' 'SReclaimable: 82264 kB' 'SUnreclaim: 80584 kB' 'KernelStack: 6720 kB' 'PageTables: 3640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55364 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
[... the same setup/common.sh@31-32 IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue trace repeats for each /proc/meminfo key from MemTotal through HugePages_Free until the requested key is reached ...]
01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
01:02:16.008 nr_hugepages=1024
01:02:16.008 resv_hugepages=0
01:02:16.008 surplus_hugepages=0
01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
01:02:16.008 anon_hugepages=0
01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
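With anon=0, surp=0 and resv=0 collected, the counters line up with the expected 1024-page pool at setup/hugepages.sh@107 and @109: nothing is reserved or surplus, and at the 2048 kB page size the pool matches the Hugetlb total of 2097152 kB in the snapshots. Below is a hedged sketch of that kind of consistency check, reading the counters straight from /proc/meminfo; the exact arithmetic in setup/hugepages.sh may differ, and verify_hugepages is an illustrative name.

# Sketch: sanity-check the hugepage counters after the allocation test.
# Not the SPDK implementation; plain awk lookups against /proc/meminfo.
verify_hugepages() {
    local expected=$1 total free resv surp size_kb hugetlb_kb
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    free=$(awk '$1 == "HugePages_Free:" {print $2}' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    size_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
    hugetlb_kb=$(awk '$1 == "Hugetlb:" {print $2}' /proc/meminfo)
    # Pool size must match what the test configured (1024 pages in this run).
    (( total == expected )) || return 1
    # After allocations settle, nothing should remain reserved or surplus.
    (( resv == 0 && surp == 0 )) || return 1
    # With a single hugepage size in use, Hugetlb is pool size times page size.
    (( hugetlb_kb == total * size_kb )) || return 1
    echo "hugepages OK: total=$total free=$free"
}
# Example with the values in the snapshots above: verify_hugepages 1024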
01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6669052 kB' 'MemAvailable: 9492920 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 448244 kB' 'Inactive: 2698792 kB' 'Active(anon): 127360 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 118764 kB' 'Mapped: 48016 kB' 'Shmem: 10468 kB' 'KReclaimable: 82264 kB' 'Slab: 162848 kB' 'SReclaimable: 82264 kB' 'SUnreclaim: 80584 kB' 'KernelStack: 6736 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55364 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.008 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6669052 kB' 'MemUsed: 5572924 kB' 'SwapCached: 0 kB' 'Active: 448212 kB' 'Inactive: 2698792 kB' 'Active(anon): 127328 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
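At this point the trace has every counter it needs, so the bookkeeping it performs reduces to the following sketch (values copied from this run; variable names mirror hugepages.sh, but the seeding of the per-node array is assumed, since it happens before this excerpt):

  nr_hugepages=1024   # requested pool size
  resv=0              # HugePages_Rsvd from /proc/meminfo
  surp=0              # HugePages_Surp from /proc/meminfo
  total=1024          # HugePages_Total from /proc/meminfo
  (( total == nr_hugepages + surp + resv ))    # 1024 == 1024 + 0 + 0, so the check passes
  # single-node VM: node0 is expected to hold the whole pool
  nodes_test[0]=$nr_hugepages                  # assumed to be seeded earlier in hugepages.sh
  (( nodes_test[0] += resv ))                  # add globally reserved pages (0 here)
  (( nodes_test[0] += 0 ))                     # add node0 HugePages_Surp, 0 in this run
  echo "node0=${nodes_test[0]} expecting $nr_hugepages"   # matches the log line further down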
kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 3030144 kB' 'Mapped: 48016 kB' 'AnonPages: 118680 kB' 'Shmem: 10468 kB' 'KernelStack: 6720 kB' 'PageTables: 3640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82264 kB' 'Slab: 162848 kB' 'SReclaimable: 82264 kB' 'SUnreclaim: 80584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.009 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 
10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 01:02:16.010 node0=1024 expecting 1024 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 01:02:16.010 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:02:16.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:02:16.624 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:02:16.624 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:02:16.624 INFO: Requested 512 hugepages but 1024 already allocated on node0 01:02:16.624 10:59:21 
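The INFO line above is the point of this test case: setup.sh is re-run with a smaller request (NRHUGE=512) while CLEAR_HUGE=no keeps the existing pages, and it leaves the 1024-page pool in place rather than shrinking it. Stripped of the surrounding trace, the step amounts to the following sketch (paths and variable names as they appear in the log):

  # re-request fewer hugepages than are currently allocated, without clearing first
  CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # the script reports: "Requested 512 hugepages but 1024 already allocated on node0",
  # and the verify_nr_hugepages pass that follows still sees HugePages_Total: 1024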
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 01:02:16.624 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 01:02:16.624 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 01:02:16.624 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 01:02:16.624 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 01:02:16.624 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 01:02:16.624 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 01:02:16.624 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 01:02:16.624 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 01:02:16.624 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 01:02:16.624 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 01:02:16.624 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6668804 kB' 'MemAvailable: 9492672 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 448860 kB' 'Inactive: 2698792 kB' 'Active(anon): 127976 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 119096 kB' 'Mapped: 48144 kB' 'Shmem: 10468 kB' 'KReclaimable: 82264 kB' 'Slab: 162928 kB' 'SReclaimable: 82264 kB' 'SUnreclaim: 80664 kB' 'KernelStack: 6792 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 338508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55476 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.625 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6668844 kB' 'MemAvailable: 9492712 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 448144 kB' 'Inactive: 2698792 kB' 'Active(anon): 127260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118664 kB' 'Mapped: 48016 kB' 'Shmem: 10468 kB' 'KReclaimable: 82264 kB' 'Slab: 162940 kB' 'SReclaimable: 82264 kB' 'SUnreclaim: 80676 kB' 'KernelStack: 6688 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55364 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.626 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
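The printf '%s\n' 'MemTotal: ...' entry at the head of this HugePages_Surp scan is the full /proc/meminfo snapshot the helper iterates over, and it carries the hugepage state the no_shrink_alloc test cares about: HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB. Those numbers are internally consistent, since the page count times the page size gives the reported Hugetlb figure:

  echo $(( 1024 * 2048 ))   # 2097152 (kB): HugePages_Total * Hugepagesize == 'Hugetlb: 2097152 kB'
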
01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 
10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.627 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6668620 kB' 'MemAvailable: 9492488 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 448524 kB' 'Inactive: 2698792 kB' 'Active(anon): 127640 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118768 kB' 'Mapped: 48016 kB' 'Shmem: 10468 kB' 'KReclaimable: 82264 kB' 'Slab: 162964 kB' 'SReclaimable: 82264 kB' 'SUnreclaim: 80700 kB' 'KernelStack: 6672 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55364 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
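At this point the same scan is being repeated for HugePages_Rsvd, after AnonHugePages (anon=0) and HugePages_Surp (surp=0) both came back as zero. For spot-checking the same counters by hand outside the test harness, a plain grep against /proc/meminfo is enough; these one-liners are a convenience for manual inspection, not something setup/common.sh itself runs:

  grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo
  # per-NUMA-node view, where node directories exist:
  grep HugePages /sys/devices/system/node/node0/meminfo 2>/dev/null
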
01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.628 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.629 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 01:02:16.630 nr_hugepages=1024 01:02:16.630 resv_hugepages=0 01:02:16.630 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 01:02:16.630 surplus_hugepages=0 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 01:02:16.631 anon_hugepages=0 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6668620 kB' 'MemAvailable: 9492488 kB' 'Buffers: 2436 kB' 'Cached: 3027708 kB' 'SwapCached: 0 kB' 'Active: 448524 kB' 'Inactive: 2698792 kB' 'Active(anon): 127640 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118768 kB' 'Mapped: 48016 kB' 'Shmem: 10468 kB' 'KReclaimable: 82264 kB' 'Slab: 162964 kB' 'SReclaimable: 82264 kB' 'SUnreclaim: 80700 kB' 'KernelStack: 6672 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55364 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
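The expanded arithmetic at hugepages.sh@107 and @109 a little further up is the actual no-shrink check: the value read back (1024) has to equal nr_hugepages + surp + resv and nr_hugepages itself, with anon, surp and resv all 0, and the scan running here fetches HugePages_Total for a follow-up comparison. A standalone re-creation of that check is sketched below; the variable names are illustrative and the exact names in setup/hugepages.sh may differ.

  nr_hugepages=1024
  surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  # The pool must not have shrunk: the kernel's total, net of surplus and
  # reserved pages, should still equal the configured count.
  if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
      echo "hugepage pool intact: $total pages"
  else
      echo "hugepage pool changed: total=$total surp=$surp resv=$resv" >&2
  fi
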
01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.631 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.632 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6668620 kB' 'MemUsed: 5573356 kB' 'SwapCached: 0 kB' 'Active: 448440 kB' 'Inactive: 2698792 kB' 'Active(anon): 127556 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 
kB' 'Inactive(file): 2698792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 3030144 kB' 'Mapped: 48016 kB' 'AnonPages: 118652 kB' 'Shmem: 10468 kB' 'KernelStack: 6656 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82264 kB' 'Slab: 162964 kB' 'SReclaimable: 82264 kB' 'SUnreclaim: 80700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.633 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 
10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.634 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 01:02:16.635 node0=1024 expecting 1024 01:02:16.635 10:59:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 01:02:16.636 01:02:16.636 real 0m1.391s 01:02:16.636 user 0m0.624s 01:02:16.636 sys 0m0.835s 01:02:16.636 10:59:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:16.636 10:59:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 01:02:16.636 ************************************ 01:02:16.636 END TEST no_shrink_alloc 01:02:16.636 ************************************ 01:02:16.894 10:59:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 01:02:16.894 10:59:21 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 01:02:16.894 10:59:21 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 01:02:16.894 10:59:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 01:02:16.894 
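The long run of "continue" lines above is bash xtrace of setup/common.sh scanning a meminfo file one key at a time: IFS=': ' read -r var val _ splits each line, every key that is not the one being asked for falls through to continue, and the matching key (HugePages_Total system-wide, then HugePages_Surp for node 0) echoes its value and returns. A minimal, hypothetical sketch of that pattern, self-contained rather than the literal helper from the repo:

    get_meminfo() {
        local key=$1 node=$2 mem_f=/proc/meminfo line var val _
        # per-node queries read the node-local file instead of the global one
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}             # node files prefix every key with "Node N "
            IFS=': ' read -r var val _ <<<"$line"  # e.g. var=HugePages_Total val=1024
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done <"$mem_f"
        return 1
    }

The two values recovered this way feed the checks visible in the trace: (( 1024 == nr_hugepages + surp + resv )) for the system total, and the per-node comparison that prints "node0=1024 expecting 1024".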
10:59:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 01:02:16.894 10:59:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 01:02:16.894 10:59:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 01:02:16.894 10:59:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 01:02:16.894 10:59:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 01:02:16.894 10:59:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 01:02:16.894 01:02:16.894 real 0m6.006s 01:02:16.894 user 0m2.667s 01:02:16.894 sys 0m3.499s 01:02:16.894 10:59:21 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:16.894 10:59:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 01:02:16.894 ************************************ 01:02:16.894 END TEST hugepages 01:02:16.894 ************************************ 01:02:16.894 10:59:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0 01:02:16.894 10:59:21 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 01:02:16.894 10:59:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:16.894 10:59:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:16.894 10:59:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 01:02:16.894 ************************************ 01:02:16.894 START TEST driver 01:02:16.894 ************************************ 01:02:16.894 10:59:21 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 01:02:16.894 * Looking for test storage... 01:02:16.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 01:02:16.894 10:59:22 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 01:02:16.894 10:59:22 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 01:02:16.894 10:59:22 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:02:17.826 10:59:22 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 01:02:17.826 10:59:22 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:17.826 10:59:22 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:17.826 10:59:22 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 01:02:17.826 ************************************ 01:02:17.826 START TEST guess_driver 01:02:17.826 ************************************ 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
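Between the hugepages suite finishing and the driver suite starting, clear_hp walks every per-node hugepage pool and writes 0 into it. xtrace prints only the bare "echo 0", so the redirection target in the sketch below is an assumption (nr_hugepages is where such writes normally land), and the loop is a simplified stand-in for the traced function:

    # requires root; writing 0 releases the pre-allocated pools
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # assumed target, xtrace does not show redirections
        done
    done
    export CLEAR_HUGE=yes                 # assumption: consumed by later scripts/setup.sh runs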
01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 01:02:17.826 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 01:02:17.826 Looking for driver=uio_pci_generic 01:02:17.826 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 01:02:17.827 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 01:02:17.827 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 01:02:17.827 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 01:02:17.827 10:59:22 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 01:02:17.827 10:59:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 01:02:17.827 10:59:22 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 01:02:18.756 10:59:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 01:02:18.756 10:59:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 01:02:18.756 10:59:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 01:02:18.757 10:59:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 01:02:18.757 10:59:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 01:02:18.757 10:59:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 01:02:18.757 10:59:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 01:02:18.757 10:59:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 01:02:18.757 10:59:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 01:02:19.013 10:59:24 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 01:02:19.013 10:59:24 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 01:02:19.013 10:59:24 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 01:02:19.013 10:59:24 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:02:19.947 01:02:19.947 real 0m1.927s 01:02:19.947 user 0m0.674s 01:02:19.947 sys 0m1.316s 01:02:19.947 10:59:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 01:02:19.947 10:59:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 01:02:19.947 ************************************ 01:02:19.947 END TEST guess_driver 01:02:19.947 ************************************ 01:02:19.947 10:59:24 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 01:02:19.947 ************************************ 01:02:19.947 END TEST driver 01:02:19.947 01:02:19.947 real 0m2.939s 01:02:19.947 user 0m1.022s 01:02:19.947 sys 0m2.078s 01:02:19.947 10:59:24 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:19.947 10:59:24 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 01:02:19.947 ************************************ 01:02:19.947 10:59:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 01:02:19.947 10:59:24 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 01:02:19.947 10:59:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:19.947 10:59:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:19.947 10:59:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 01:02:19.947 ************************************ 01:02:19.947 START TEST devices 01:02:19.947 ************************************ 01:02:19.947 10:59:24 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 01:02:19.947 * Looking for test storage... 01:02:19.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 01:02:19.947 10:59:25 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 01:02:19.947 10:59:25 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 01:02:19.947 10:59:25 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 01:02:19.947 10:59:25 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:02:20.883 10:59:26 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
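The guess_driver trace above reduces to a two-step decision: prefer vfio-pci when IOMMU groups exist (or unsafe no-IOMMU mode is enabled), otherwise accept uio_pci_generic if modprobe can resolve it to real kernel modules. A condensed, hypothetical rewrite of that decision, not the literal setup/driver.sh code:

    shopt -s nullglob                     # an empty iommu_groups dir must give a zero-length array
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=""
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] \
            && unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic          # the "== *\.\k\o*" test from the trace, as a grep
        else
            echo 'No valid driver found'
        fi
    }

On this VM the group count is 0 and unsafe mode is not enabled, which is why the run settles on uio_pci_generic and then confirms it against the setup.sh config output.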
01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:02:20.883 10:59:26 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:02:20.883 10:59:26 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 01:02:20.883 10:59:26 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 01:02:20.883 10:59:26 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 01:02:20.883 10:59:26 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 01:02:20.883 10:59:26 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 01:02:20.883 10:59:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 01:02:20.883 10:59:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 01:02:20.883 10:59:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 01:02:20.883 10:59:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 01:02:20.883 10:59:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 01:02:20.883 10:59:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 01:02:20.883 10:59:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 01:02:20.883 10:59:26 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:02:21.142 No valid GPT data, bailing 01:02:21.142 10:59:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:02:21.142 10:59:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 01:02:21.142 10:59:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 01:02:21.142 10:59:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 01:02:21.142 10:59:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 01:02:21.142 10:59:26 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 01:02:21.142 
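Each NVMe namespace goes through the same screen before it can become the test disk: it must not be zoned, must not already carry a partition table, and must be at least min_disk_size=3221225472 bytes (3 GiB). Roughly, and with plain blkid standing in for the spdk-gpt.py probe used in the trace:

    min_disk_size=$((3 * 1024 * 1024 * 1024))            # 3221225472, as set in devices.sh
    for block in /sys/block/nvme*n*; do                   # the traced glob also excludes nvme*c* aliases
        dev=${block##*/}
        [[ $(<"$block/queue/zoned") == none ]] || continue            # skip zoned namespaces
        blkid -s PTTYPE -o value "/dev/$dev" | grep -q . && continue  # a live partition table disqualifies it
        size=$(( $(<"$block/size") * 512 ))                # /sys size counts 512-byte sectors
        (( size >= min_disk_size )) && echo "candidate: $dev ($size bytes)"
    done

That is how the three 4 GiB namespaces and the 5 GiB nvme1n1 all pass here, and nvme0n1, the first survivor, becomes the test disk for the mount tests that follow.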
10:59:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 01:02:21.142 10:59:26 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:02:21.142 No valid GPT data, bailing 01:02:21.142 10:59:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:02:21.142 10:59:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 01:02:21.142 10:59:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 01:02:21.142 10:59:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 01:02:21.142 10:59:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 01:02:21.142 10:59:26 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 01:02:21.142 10:59:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 01:02:21.142 10:59:26 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:02:21.142 No valid GPT data, bailing 01:02:21.142 10:59:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:02:21.142 10:59:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 01:02:21.142 10:59:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 01:02:21.142 10:59:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 01:02:21.142 10:59:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 01:02:21.142 10:59:26 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 01:02:21.142 10:59:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 01:02:21.142 10:59:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 01:02:21.142 10:59:26 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:02:21.142 No valid GPT data, bailing 01:02:21.401 10:59:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:02:21.401 10:59:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 01:02:21.401 10:59:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 01:02:21.401 10:59:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 01:02:21.401 10:59:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 01:02:21.401 10:59:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 01:02:21.401 10:59:26 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 01:02:21.401 10:59:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 01:02:21.401 10:59:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 01:02:21.401 10:59:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 01:02:21.401 10:59:26 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 01:02:21.401 10:59:26 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 01:02:21.401 10:59:26 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 01:02:21.401 10:59:26 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:21.401 10:59:26 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:21.401 10:59:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 01:02:21.401 ************************************ 01:02:21.401 START TEST nvme_mount 01:02:21.401 ************************************ 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 01:02:21.401 10:59:26 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 01:02:22.337 Creating new GPT entries in memory. 01:02:22.337 GPT data structures destroyed! You may now partition the disk using fdisk or 01:02:22.337 other utilities. 01:02:22.337 10:59:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 01:02:22.337 10:59:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 01:02:22.337 10:59:27 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 01:02:22.337 10:59:27 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 01:02:22.337 10:59:27 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 01:02:23.270 Creating new GPT entries in memory. 01:02:23.270 The operation has completed successfully. 01:02:23.270 10:59:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 01:02:23.270 10:59:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 01:02:23.270 10:59:28 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 69517 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 01:02:23.529 10:59:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 01:02:23.788 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:23.788 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 01:02:23.788 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 01:02:23.788 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:23.788 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:23.788 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:23.788 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:23.788 10:59:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:24.047 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:24.047 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:24.047 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 01:02:24.047 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 01:02:24.047 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:24.047 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 01:02:24.047 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 01:02:24.047 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 01:02:24.047 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:24.047 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:24.047 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 01:02:24.047 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 01:02:24.047 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 01:02:24.047 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 01:02:24.047 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 01:02:24.305 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 01:02:24.305 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 01:02:24.305 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 01:02:24.305 /dev/nvme0n1: calling ioctl to re-read partition table: Success 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 01:02:24.305 10:59:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 01:02:24.882 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:24.883 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 01:02:24.883 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 01:02:24.883 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:24.883 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:24.883 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:24.883 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:24.883 10:59:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:24.883 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:24.883 10:59:30 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 01:02:25.141 10:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 01:02:25.401 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:25.401 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 01:02:25.401 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 01:02:25.401 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:25.401 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:25.401 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:25.660 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:25.660 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:25.660 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:25.660 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:25.660 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 01:02:25.660 10:59:30 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 01:02:25.660 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 01:02:25.660 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 01:02:25.660 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:25.660 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 01:02:25.660 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 01:02:25.660 10:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 01:02:25.660 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 01:02:25.660 01:02:25.660 real 0m4.484s 01:02:25.660 user 0m0.837s 01:02:25.660 sys 0m1.375s 01:02:25.660 10:59:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:25.660 10:59:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 01:02:25.660 ************************************ 01:02:25.660 END TEST nvme_mount 01:02:25.660 ************************************ 01:02:25.918 10:59:30 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 01:02:25.918 10:59:30 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 01:02:25.918 10:59:30 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:25.918 10:59:30 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:25.918 10:59:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 01:02:25.918 ************************************ 01:02:25.918 START TEST dm_mount 01:02:25.918 ************************************ 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
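Before dm_mount repartitions the disk, the cleanup_nvme pass that nvme_mount just ran (traced above) boils down to the following shell sequence; a condensed sketch of what the xtrace shows, with the device and mount-point names taken from this run:

    nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mountpoint -q "$nvme_mount" && umount "$nvme_mount"        # unmount the test filesystem if it is still mounted
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1     # clear the ext4 signature on the partition
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1         # then the GPT headers and protective MBR on the whole disk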
01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 01:02:25.918 10:59:30 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 01:02:26.881 Creating new GPT entries in memory. 01:02:26.881 GPT data structures destroyed! You may now partition the disk using fdisk or 01:02:26.881 other utilities. 01:02:26.881 10:59:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 01:02:26.881 10:59:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 01:02:26.881 10:59:31 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 01:02:26.881 10:59:31 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 01:02:26.881 10:59:31 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 01:02:27.816 Creating new GPT entries in memory. 01:02:27.816 The operation has completed successfully. 01:02:27.816 10:59:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 01:02:27.816 10:59:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 01:02:27.816 10:59:32 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 01:02:27.816 10:59:32 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 01:02:27.816 10:59:32 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 01:02:29.189 The operation has completed successfully. 
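The sgdisk calls just logged follow the partition_drive arithmetic from setup/common.sh; a condensed sketch of that loop, with the disk name, partition count and size taken from this run:

    disk=/dev/nvme0n1
    part_no=2
    size=$(( 1073741824 / 4096 ))                      # 262144 sectors per partition, as in the trace
    sgdisk "$disk" --zap-all                           # wipe any existing GPT/MBR structures first
    part_start=0 part_end=0
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
    done
    # produces --new=1:2048:264191 and --new=2:264192:526335, matching the log above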
01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 69954 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:29.189 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:29.447 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:29.448 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 01:02:29.706 10:59:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 01:02:29.964 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:29.964 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 01:02:29.964 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 01:02:29.964 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:29.964 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:29.964 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:30.222 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:30.222 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:30.222 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:02:30.222 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:02:30.222 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 01:02:30.222 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 01:02:30.222 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 01:02:30.222 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 01:02:30.222 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:02:30.222 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 01:02:30.222 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 01:02:30.481 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 01:02:30.481 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 01:02:30.481 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 01:02:30.481 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 01:02:30.481 10:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 01:02:30.481 01:02:30.481 real 0m4.537s 01:02:30.481 user 0m0.522s 01:02:30.481 sys 0m0.988s 01:02:30.481 10:59:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:30.481 ************************************ 01:02:30.481 END TEST dm_mount 01:02:30.481 ************************************ 01:02:30.481 10:59:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 01:02:30.481 10:59:35 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 01:02:30.481 10:59:35 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 01:02:30.481 10:59:35 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 01:02:30.481 10:59:35 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:02:30.481 10:59:35 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 01:02:30.481 10:59:35 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 01:02:30.481 10:59:35 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 01:02:30.481 10:59:35 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 01:02:30.740 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 01:02:30.740 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 01:02:30.740 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 01:02:30.740 /dev/nvme0n1: calling ioctl to re-read partition table: Success 01:02:30.740 10:59:35 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 01:02:30.740 10:59:35 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:02:30.740 10:59:35 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 01:02:30.740 10:59:35 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 01:02:30.740 10:59:35 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 01:02:30.740 10:59:35 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 01:02:30.740 10:59:35 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 01:02:30.740 01:02:30.740 real 0m10.846s 01:02:30.740 user 0m2.053s 01:02:30.740 sys 0m3.228s 01:02:30.740 10:59:35 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:30.740 10:59:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 01:02:30.740 ************************************ 01:02:30.740 END TEST devices 01:02:30.740 ************************************ 01:02:30.740 10:59:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0 01:02:30.740 01:02:30.740 real 0m26.487s 01:02:30.740 user 0m8.322s 01:02:30.740 sys 0m12.940s 01:02:30.740 10:59:35 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:30.740 ************************************ 01:02:30.740 END TEST setup.sh 01:02:30.740 ************************************ 01:02:30.740 10:59:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 01:02:30.740 10:59:35 -- common/autotest_common.sh@1142 -- # return 0 01:02:30.740 10:59:35 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 01:02:31.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:02:31.675 Hugepages 01:02:31.675 node hugesize free / total 01:02:31.675 node0 1048576kB 0 / 0 01:02:31.675 node0 2048kB 2048 / 2048 01:02:31.675 01:02:31.675 Type BDF Vendor Device NUMA Driver Device Block devices 01:02:31.675 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 01:02:31.675 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 01:02:31.933 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 01:02:31.933 10:59:36 -- spdk/autotest.sh@130 -- # uname -s 01:02:31.933 10:59:36 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 01:02:31.933 10:59:36 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 01:02:31.933 10:59:36 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:02:32.501 0000:00:03.0 (1af4 1001): Active 
devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:02:32.760 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:02:32.760 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:02:32.760 10:59:37 -- common/autotest_common.sh@1532 -- # sleep 1 01:02:34.139 10:59:38 -- common/autotest_common.sh@1533 -- # bdfs=() 01:02:34.139 10:59:38 -- common/autotest_common.sh@1533 -- # local bdfs 01:02:34.139 10:59:38 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 01:02:34.139 10:59:38 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 01:02:34.139 10:59:38 -- common/autotest_common.sh@1513 -- # bdfs=() 01:02:34.139 10:59:38 -- common/autotest_common.sh@1513 -- # local bdfs 01:02:34.139 10:59:38 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:02:34.139 10:59:38 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:02:34.139 10:59:38 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 01:02:34.139 10:59:39 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 01:02:34.139 10:59:39 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:02:34.139 10:59:39 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:02:34.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:02:34.397 Waiting for block devices as requested 01:02:34.655 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:02:34.655 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:02:34.655 10:59:39 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 01:02:34.655 10:59:39 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 01:02:34.655 10:59:39 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 01:02:34.655 10:59:39 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 01:02:34.655 10:59:39 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 01:02:34.655 10:59:39 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 01:02:34.655 10:59:39 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 01:02:34.655 10:59:39 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 01:02:34.655 10:59:39 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 01:02:34.655 10:59:39 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 01:02:34.655 10:59:39 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 01:02:34.655 10:59:39 -- common/autotest_common.sh@1545 -- # grep oacs 01:02:34.655 10:59:39 -- common/autotest_common.sh@1545 -- # cut -d: -f2 01:02:34.655 10:59:39 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 01:02:34.655 10:59:39 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 01:02:34.655 10:59:39 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 01:02:34.655 10:59:39 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 01:02:34.655 10:59:39 -- common/autotest_common.sh@1554 -- # grep unvmcap 01:02:34.655 10:59:39 -- common/autotest_common.sh@1554 -- # cut -d: -f2 01:02:34.913 10:59:39 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 01:02:34.913 10:59:39 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 01:02:34.913 10:59:39 -- common/autotest_common.sh@1557 -- # continue 
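The id-ctrl probing above is what nvme_namespace_revert uses to decide whether a controller needs its namespaces restored; a condensed sketch of that per-controller check, with the controller paths from this run and the 0x8 mask (the namespace-management bit of OACS) inferred from oacs=0x12a yielding oacs_ns_manage=8:

    for ctrlr in /dev/nvme0 /dev/nvme1; do
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)           # ' 0x12a' for both controllers here
        oacs_ns_manage=$(( oacs & 0x8 ))                                  # namespace-management capability bit
        (( oacs_ns_manage == 0 )) && continue                             # controller cannot manage namespaces
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)     # unallocated NVM capacity
        (( unvmcap == 0 )) && continue                                    # capacity fully allocated, nothing to revert
    done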
01:02:34.913 10:59:39 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 01:02:34.913 10:59:39 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 01:02:34.913 10:59:39 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 01:02:34.914 10:59:39 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 01:02:34.914 10:59:39 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 01:02:34.914 10:59:39 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 01:02:34.914 10:59:39 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 01:02:34.914 10:59:39 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 01:02:34.914 10:59:39 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 01:02:34.914 10:59:39 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 01:02:34.914 10:59:39 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 01:02:34.914 10:59:39 -- common/autotest_common.sh@1545 -- # grep oacs 01:02:34.914 10:59:39 -- common/autotest_common.sh@1545 -- # cut -d: -f2 01:02:34.914 10:59:39 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 01:02:34.914 10:59:39 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 01:02:34.914 10:59:39 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 01:02:34.914 10:59:39 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 01:02:34.914 10:59:39 -- common/autotest_common.sh@1554 -- # grep unvmcap 01:02:34.914 10:59:39 -- common/autotest_common.sh@1554 -- # cut -d: -f2 01:02:34.914 10:59:39 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 01:02:34.914 10:59:39 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 01:02:34.914 10:59:39 -- common/autotest_common.sh@1557 -- # continue 01:02:34.914 10:59:39 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 01:02:34.914 10:59:39 -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:34.914 10:59:39 -- common/autotest_common.sh@10 -- # set +x 01:02:34.914 10:59:39 -- spdk/autotest.sh@138 -- # timing_enter afterboot 01:02:34.914 10:59:39 -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:34.914 10:59:39 -- common/autotest_common.sh@10 -- # set +x 01:02:34.914 10:59:39 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:02:35.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:02:35.849 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:02:35.849 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:02:35.849 10:59:40 -- spdk/autotest.sh@140 -- # timing_exit afterboot 01:02:35.849 10:59:40 -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:35.849 10:59:40 -- common/autotest_common.sh@10 -- # set +x 01:02:35.849 10:59:41 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 01:02:35.849 10:59:41 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 01:02:35.850 10:59:41 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 01:02:35.850 10:59:41 -- common/autotest_common.sh@1577 -- # bdfs=() 01:02:35.850 10:59:41 -- common/autotest_common.sh@1577 -- # local bdfs 01:02:35.850 10:59:41 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 01:02:35.850 10:59:41 -- common/autotest_common.sh@1513 -- # bdfs=() 01:02:35.850 10:59:41 -- common/autotest_common.sh@1513 -- # local bdfs 01:02:35.850 10:59:41 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:02:35.850 10:59:41 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:02:35.850 10:59:41 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 01:02:36.107 10:59:41 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 01:02:36.107 10:59:41 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:02:36.107 10:59:41 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 01:02:36.107 10:59:41 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 01:02:36.107 10:59:41 -- common/autotest_common.sh@1580 -- # device=0x0010 01:02:36.107 10:59:41 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:02:36.107 10:59:41 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 01:02:36.107 10:59:41 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 01:02:36.107 10:59:41 -- common/autotest_common.sh@1580 -- # device=0x0010 01:02:36.107 10:59:41 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:02:36.107 10:59:41 -- common/autotest_common.sh@1586 -- # printf '%s\n' 01:02:36.107 10:59:41 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 01:02:36.107 10:59:41 -- common/autotest_common.sh@1593 -- # return 0 01:02:36.107 10:59:41 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 01:02:36.107 10:59:41 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 01:02:36.107 10:59:41 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 01:02:36.107 10:59:41 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 01:02:36.107 10:59:41 -- spdk/autotest.sh@162 -- # timing_enter lib 01:02:36.107 10:59:41 -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:36.107 10:59:41 -- common/autotest_common.sh@10 -- # set +x 01:02:36.107 10:59:41 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 01:02:36.107 10:59:41 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 01:02:36.107 10:59:41 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 01:02:36.107 10:59:41 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 01:02:36.107 10:59:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:36.107 10:59:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:36.107 10:59:41 -- common/autotest_common.sh@10 -- # set +x 01:02:36.107 ************************************ 01:02:36.107 START TEST env 01:02:36.107 ************************************ 01:02:36.107 10:59:41 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 01:02:36.107 * Looking for test storage... 
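opal_revert_cleanup only acts on controllers whose PCI device id matches 0x0a54, which the trace above checks by reading sysfs for every NVMe bdf reported by gen_nvme.sh; roughly, with the bdf list from this run hard-coded:

    bdfs=()
    for bdf in 0000:00:10.0 0000:00:11.0; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # '0x0010' for the emulated NVMe devices here
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")          # keep only controllers with the targeted device id
    done
    # bdfs stays empty in this run, so the OPAL revert step is skipped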
01:02:36.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 01:02:36.107 10:59:41 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 01:02:36.107 10:59:41 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:36.107 10:59:41 env -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:36.107 10:59:41 env -- common/autotest_common.sh@10 -- # set +x 01:02:36.107 ************************************ 01:02:36.107 START TEST env_memory 01:02:36.107 ************************************ 01:02:36.107 10:59:41 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 01:02:36.365 01:02:36.365 01:02:36.365 CUnit - A unit testing framework for C - Version 2.1-3 01:02:36.365 http://cunit.sourceforge.net/ 01:02:36.365 01:02:36.365 01:02:36.365 Suite: memory 01:02:36.365 Test: alloc and free memory map ...[2024-07-22 10:59:41.337018] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 01:02:36.365 passed 01:02:36.365 Test: mem map translation ...[2024-07-22 10:59:41.359344] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 01:02:36.365 [2024-07-22 10:59:41.359397] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 01:02:36.365 [2024-07-22 10:59:41.359442] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 01:02:36.365 [2024-07-22 10:59:41.359455] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 01:02:36.365 passed 01:02:36.365 Test: mem map registration ...[2024-07-22 10:59:41.401556] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 01:02:36.365 [2024-07-22 10:59:41.401611] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 01:02:36.365 passed 01:02:36.365 Test: mem map adjacent registrations ...passed 01:02:36.365 01:02:36.365 Run Summary: Type Total Ran Passed Failed Inactive 01:02:36.365 suites 1 1 n/a 0 0 01:02:36.365 tests 4 4 4 0 0 01:02:36.365 asserts 152 152 152 0 n/a 01:02:36.365 01:02:36.365 Elapsed time = 0.163 seconds 01:02:36.365 01:02:36.365 real 0m0.176s 01:02:36.365 user 0m0.161s 01:02:36.365 sys 0m0.012s 01:02:36.365 10:59:41 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:36.365 ************************************ 01:02:36.365 END TEST env_memory 01:02:36.365 ************************************ 01:02:36.365 10:59:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 01:02:36.365 10:59:41 env -- common/autotest_common.sh@1142 -- # return 0 01:02:36.365 10:59:41 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 01:02:36.365 10:59:41 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:36.365 10:59:41 env -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:36.365 10:59:41 env -- common/autotest_common.sh@10 -- # set +x 01:02:36.365 ************************************ 01:02:36.365 START TEST env_vtophys 
01:02:36.365 ************************************ 01:02:36.365 10:59:41 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 01:02:36.623 EAL: lib.eal log level changed from notice to debug 01:02:36.623 EAL: Detected lcore 0 as core 0 on socket 0 01:02:36.623 EAL: Detected lcore 1 as core 0 on socket 0 01:02:36.623 EAL: Detected lcore 2 as core 0 on socket 0 01:02:36.623 EAL: Detected lcore 3 as core 0 on socket 0 01:02:36.623 EAL: Detected lcore 4 as core 0 on socket 0 01:02:36.623 EAL: Detected lcore 5 as core 0 on socket 0 01:02:36.623 EAL: Detected lcore 6 as core 0 on socket 0 01:02:36.624 EAL: Detected lcore 7 as core 0 on socket 0 01:02:36.624 EAL: Detected lcore 8 as core 0 on socket 0 01:02:36.624 EAL: Detected lcore 9 as core 0 on socket 0 01:02:36.624 EAL: Maximum logical cores by configuration: 128 01:02:36.624 EAL: Detected CPU lcores: 10 01:02:36.624 EAL: Detected NUMA nodes: 1 01:02:36.624 EAL: Checking presence of .so 'librte_eal.so.24.0' 01:02:36.624 EAL: Detected shared linkage of DPDK 01:02:36.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 01:02:36.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 01:02:36.624 EAL: Registered [vdev] bus. 01:02:36.624 EAL: bus.vdev log level changed from disabled to notice 01:02:36.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 01:02:36.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 01:02:36.624 EAL: pmd.net.i40e.init log level changed from disabled to notice 01:02:36.624 EAL: pmd.net.i40e.driver log level changed from disabled to notice 01:02:36.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 01:02:36.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 01:02:36.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 01:02:36.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 01:02:36.624 EAL: No shared files mode enabled, IPC will be disabled 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.624 EAL: Selected IOVA mode 'PA' 01:02:36.624 EAL: Probing VFIO support... 01:02:36.624 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 01:02:36.624 EAL: VFIO modules not loaded, skipping VFIO support... 01:02:36.624 EAL: Ask a virtual area of 0x2e000 bytes 01:02:36.624 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 01:02:36.624 EAL: Setting up physically contiguous memory... 
01:02:36.624 EAL: Setting maximum number of open files to 524288 01:02:36.624 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 01:02:36.624 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 01:02:36.624 EAL: Ask a virtual area of 0x61000 bytes 01:02:36.624 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 01:02:36.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:02:36.624 EAL: Ask a virtual area of 0x400000000 bytes 01:02:36.624 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 01:02:36.624 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 01:02:36.624 EAL: Ask a virtual area of 0x61000 bytes 01:02:36.624 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 01:02:36.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:02:36.624 EAL: Ask a virtual area of 0x400000000 bytes 01:02:36.624 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 01:02:36.624 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 01:02:36.624 EAL: Ask a virtual area of 0x61000 bytes 01:02:36.624 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 01:02:36.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:02:36.624 EAL: Ask a virtual area of 0x400000000 bytes 01:02:36.624 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 01:02:36.624 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 01:02:36.624 EAL: Ask a virtual area of 0x61000 bytes 01:02:36.624 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 01:02:36.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:02:36.624 EAL: Ask a virtual area of 0x400000000 bytes 01:02:36.624 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 01:02:36.624 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 01:02:36.624 EAL: Hugepages will be freed exactly as allocated. 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.624 EAL: TSC frequency is ~2490000 KHz 01:02:36.624 EAL: Main lcore 0 is ready (tid=7ff37be2ea00;cpuset=[0]) 01:02:36.624 EAL: Trying to obtain current memory policy. 01:02:36.624 EAL: Setting policy MPOL_PREFERRED for socket 0 01:02:36.624 EAL: Restoring previous memory policy: 0 01:02:36.624 EAL: request: mp_malloc_sync 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.624 EAL: Heap on socket 0 was expanded by 2MB 01:02:36.624 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.624 EAL: No PCI address specified using 'addr=' in: bus=pci 01:02:36.624 EAL: Mem event callback 'spdk:(nil)' registered 01:02:36.624 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 01:02:36.624 01:02:36.624 01:02:36.624 CUnit - A unit testing framework for C - Version 2.1-3 01:02:36.624 http://cunit.sourceforge.net/ 01:02:36.624 01:02:36.624 01:02:36.624 Suite: components_suite 01:02:36.624 Test: vtophys_malloc_test ...passed 01:02:36.624 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
01:02:36.624 EAL: Setting policy MPOL_PREFERRED for socket 0 01:02:36.624 EAL: Restoring previous memory policy: 4 01:02:36.624 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.624 EAL: request: mp_malloc_sync 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.624 EAL: Heap on socket 0 was expanded by 4MB 01:02:36.624 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.624 EAL: request: mp_malloc_sync 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.624 EAL: Heap on socket 0 was shrunk by 4MB 01:02:36.624 EAL: Trying to obtain current memory policy. 01:02:36.624 EAL: Setting policy MPOL_PREFERRED for socket 0 01:02:36.624 EAL: Restoring previous memory policy: 4 01:02:36.624 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.624 EAL: request: mp_malloc_sync 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.624 EAL: Heap on socket 0 was expanded by 6MB 01:02:36.624 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.624 EAL: request: mp_malloc_sync 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.624 EAL: Heap on socket 0 was shrunk by 6MB 01:02:36.624 EAL: Trying to obtain current memory policy. 01:02:36.624 EAL: Setting policy MPOL_PREFERRED for socket 0 01:02:36.624 EAL: Restoring previous memory policy: 4 01:02:36.624 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.624 EAL: request: mp_malloc_sync 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.624 EAL: Heap on socket 0 was expanded by 10MB 01:02:36.624 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.624 EAL: request: mp_malloc_sync 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.624 EAL: Heap on socket 0 was shrunk by 10MB 01:02:36.624 EAL: Trying to obtain current memory policy. 01:02:36.624 EAL: Setting policy MPOL_PREFERRED for socket 0 01:02:36.624 EAL: Restoring previous memory policy: 4 01:02:36.624 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.624 EAL: request: mp_malloc_sync 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.624 EAL: Heap on socket 0 was expanded by 18MB 01:02:36.624 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.624 EAL: request: mp_malloc_sync 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.624 EAL: Heap on socket 0 was shrunk by 18MB 01:02:36.624 EAL: Trying to obtain current memory policy. 01:02:36.624 EAL: Setting policy MPOL_PREFERRED for socket 0 01:02:36.624 EAL: Restoring previous memory policy: 4 01:02:36.624 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.624 EAL: request: mp_malloc_sync 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.624 EAL: Heap on socket 0 was expanded by 34MB 01:02:36.624 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.624 EAL: request: mp_malloc_sync 01:02:36.624 EAL: No shared files mode enabled, IPC is disabled 01:02:36.625 EAL: Heap on socket 0 was shrunk by 34MB 01:02:36.625 EAL: Trying to obtain current memory policy. 
01:02:36.625 EAL: Setting policy MPOL_PREFERRED for socket 0 01:02:36.625 EAL: Restoring previous memory policy: 4 01:02:36.625 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.625 EAL: request: mp_malloc_sync 01:02:36.625 EAL: No shared files mode enabled, IPC is disabled 01:02:36.625 EAL: Heap on socket 0 was expanded by 66MB 01:02:36.625 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.625 EAL: request: mp_malloc_sync 01:02:36.625 EAL: No shared files mode enabled, IPC is disabled 01:02:36.625 EAL: Heap on socket 0 was shrunk by 66MB 01:02:36.625 EAL: Trying to obtain current memory policy. 01:02:36.625 EAL: Setting policy MPOL_PREFERRED for socket 0 01:02:36.625 EAL: Restoring previous memory policy: 4 01:02:36.625 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.625 EAL: request: mp_malloc_sync 01:02:36.625 EAL: No shared files mode enabled, IPC is disabled 01:02:36.625 EAL: Heap on socket 0 was expanded by 130MB 01:02:36.625 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.882 EAL: request: mp_malloc_sync 01:02:36.882 EAL: No shared files mode enabled, IPC is disabled 01:02:36.882 EAL: Heap on socket 0 was shrunk by 130MB 01:02:36.882 EAL: Trying to obtain current memory policy. 01:02:36.882 EAL: Setting policy MPOL_PREFERRED for socket 0 01:02:36.882 EAL: Restoring previous memory policy: 4 01:02:36.882 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.882 EAL: request: mp_malloc_sync 01:02:36.882 EAL: No shared files mode enabled, IPC is disabled 01:02:36.882 EAL: Heap on socket 0 was expanded by 258MB 01:02:36.882 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.882 EAL: request: mp_malloc_sync 01:02:36.882 EAL: No shared files mode enabled, IPC is disabled 01:02:36.882 EAL: Heap on socket 0 was shrunk by 258MB 01:02:36.882 EAL: Trying to obtain current memory policy. 01:02:36.882 EAL: Setting policy MPOL_PREFERRED for socket 0 01:02:36.882 EAL: Restoring previous memory policy: 4 01:02:36.882 EAL: Calling mem event callback 'spdk:(nil)' 01:02:36.882 EAL: request: mp_malloc_sync 01:02:36.882 EAL: No shared files mode enabled, IPC is disabled 01:02:36.882 EAL: Heap on socket 0 was expanded by 514MB 01:02:37.140 EAL: Calling mem event callback 'spdk:(nil)' 01:02:37.140 EAL: request: mp_malloc_sync 01:02:37.140 EAL: No shared files mode enabled, IPC is disabled 01:02:37.140 EAL: Heap on socket 0 was shrunk by 514MB 01:02:37.140 EAL: Trying to obtain current memory policy. 
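The expansion sizes logged by vtophys_spdk_malloc_test (4MB, 6MB, 10MB, ... and the 1026MB step a little further down) appear to follow a (2^k + 2) MB progression, which is easy to verify:

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB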
01:02:37.140 EAL: Setting policy MPOL_PREFERRED for socket 0 01:02:37.400 EAL: Restoring previous memory policy: 4 01:02:37.400 EAL: Calling mem event callback 'spdk:(nil)' 01:02:37.400 EAL: request: mp_malloc_sync 01:02:37.400 EAL: No shared files mode enabled, IPC is disabled 01:02:37.400 EAL: Heap on socket 0 was expanded by 1026MB 01:02:37.400 EAL: Calling mem event callback 'spdk:(nil)' 01:02:37.658 passed 01:02:37.658 01:02:37.658 Run Summary: Type Total Ran Passed Failed Inactive 01:02:37.658 suites 1 1 n/a 0 0 01:02:37.658 tests 2 2 2 0 0 01:02:37.658 asserts 5337 5337 5337 0 n/a 01:02:37.658 01:02:37.658 Elapsed time = 0.982 seconds 01:02:37.658 EAL: request: mp_malloc_sync 01:02:37.658 EAL: No shared files mode enabled, IPC is disabled 01:02:37.658 EAL: Heap on socket 0 was shrunk by 1026MB 01:02:37.658 EAL: Calling mem event callback 'spdk:(nil)' 01:02:37.658 EAL: request: mp_malloc_sync 01:02:37.658 EAL: No shared files mode enabled, IPC is disabled 01:02:37.658 EAL: Heap on socket 0 was shrunk by 2MB 01:02:37.658 EAL: No shared files mode enabled, IPC is disabled 01:02:37.658 EAL: No shared files mode enabled, IPC is disabled 01:02:37.658 EAL: No shared files mode enabled, IPC is disabled 01:02:37.658 ************************************ 01:02:37.658 END TEST env_vtophys 01:02:37.658 ************************************ 01:02:37.658 01:02:37.658 real 0m1.189s 01:02:37.658 user 0m0.643s 01:02:37.658 sys 0m0.415s 01:02:37.658 10:59:42 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:37.658 10:59:42 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 01:02:37.658 10:59:42 env -- common/autotest_common.sh@1142 -- # return 0 01:02:37.658 10:59:42 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 01:02:37.658 10:59:42 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:37.658 10:59:42 env -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:37.658 10:59:42 env -- common/autotest_common.sh@10 -- # set +x 01:02:37.658 ************************************ 01:02:37.658 START TEST env_pci 01:02:37.658 ************************************ 01:02:37.658 10:59:42 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 01:02:37.658 01:02:37.658 01:02:37.658 CUnit - A unit testing framework for C - Version 2.1-3 01:02:37.658 http://cunit.sourceforge.net/ 01:02:37.658 01:02:37.658 01:02:37.658 Suite: pci 01:02:37.658 Test: pci_hook ...[2024-07-22 10:59:42.822285] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 71164 has claimed it 01:02:37.658 passed 01:02:37.658 01:02:37.658 Run Summary: Type Total Ran Passed Failed Inactive 01:02:37.658 suites 1 1 n/a 0 0 01:02:37.658 tests 1 1 1 0 0 01:02:37.658 asserts 25 25 25 0 n/a 01:02:37.658 01:02:37.658 Elapsed time = 0.003 seconds 01:02:37.658 EAL: Cannot find device (10000:00:01.0) 01:02:37.658 EAL: Failed to attach device on primary process 01:02:37.658 01:02:37.658 real 0m0.030s 01:02:37.658 user 0m0.014s 01:02:37.658 sys 0m0.015s 01:02:37.658 10:59:42 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:37.658 10:59:42 env.env_pci -- common/autotest_common.sh@10 -- # set +x 01:02:37.658 ************************************ 01:02:37.658 END TEST env_pci 01:02:37.658 ************************************ 01:02:37.917 10:59:42 env -- common/autotest_common.sh@1142 -- # 
return 0 01:02:37.917 10:59:42 env -- env/env.sh@14 -- # argv='-c 0x1 ' 01:02:37.917 10:59:42 env -- env/env.sh@15 -- # uname 01:02:37.917 10:59:42 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 01:02:37.917 10:59:42 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 01:02:37.917 10:59:42 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 01:02:37.917 10:59:42 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 01:02:37.917 10:59:42 env -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:37.917 10:59:42 env -- common/autotest_common.sh@10 -- # set +x 01:02:37.917 ************************************ 01:02:37.917 START TEST env_dpdk_post_init 01:02:37.917 ************************************ 01:02:37.917 10:59:42 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 01:02:37.917 EAL: Detected CPU lcores: 10 01:02:37.917 EAL: Detected NUMA nodes: 1 01:02:37.917 EAL: Detected shared linkage of DPDK 01:02:37.917 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 01:02:37.917 EAL: Selected IOVA mode 'PA' 01:02:37.917 TELEMETRY: No legacy callbacks, legacy socket not created 01:02:37.917 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 01:02:37.917 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 01:02:37.917 Starting DPDK initialization... 01:02:37.917 Starting SPDK post initialization... 01:02:37.917 SPDK NVMe probe 01:02:37.917 Attaching to 0000:00:10.0 01:02:37.917 Attaching to 0000:00:11.0 01:02:37.917 Attached to 0000:00:10.0 01:02:37.917 Attached to 0000:00:11.0 01:02:37.917 Cleaning up... 
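env_dpdk_post_init runs a full EAL plus SPDK initialization and attaches the spdk_nvme driver to the two emulated NVMe controllers (vendor:device 1b36:0010) at 0000:00:10.0 and 0000:00:11.0. Outside the harness, a generic way to see which PCI functions such a probe would find is plain lspci; this is not what the test itself uses, and the BDFs are specific to this VM.

```bash
# List NVMe-class PCI functions; 1b36:0010 is the QEMU-emulated NVMe
# controller the probe above attached to. BDFs are specific to this VM.
lspci -Dnn | grep -Ei 'non-volatile memory|1b36:0010'
```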
01:02:37.917 ************************************ 01:02:37.917 END TEST env_dpdk_post_init 01:02:37.917 ************************************ 01:02:37.917 01:02:37.917 real 0m0.188s 01:02:37.917 user 0m0.048s 01:02:37.917 sys 0m0.041s 01:02:37.917 10:59:43 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:37.917 10:59:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 01:02:38.175 10:59:43 env -- common/autotest_common.sh@1142 -- # return 0 01:02:38.175 10:59:43 env -- env/env.sh@26 -- # uname 01:02:38.175 10:59:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 01:02:38.175 10:59:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 01:02:38.175 10:59:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:38.175 10:59:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:38.175 10:59:43 env -- common/autotest_common.sh@10 -- # set +x 01:02:38.175 ************************************ 01:02:38.175 START TEST env_mem_callbacks 01:02:38.175 ************************************ 01:02:38.175 10:59:43 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 01:02:38.175 EAL: Detected CPU lcores: 10 01:02:38.175 EAL: Detected NUMA nodes: 1 01:02:38.175 EAL: Detected shared linkage of DPDK 01:02:38.175 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 01:02:38.175 EAL: Selected IOVA mode 'PA' 01:02:38.175 TELEMETRY: No legacy callbacks, legacy socket not created 01:02:38.175 01:02:38.175 01:02:38.175 CUnit - A unit testing framework for C - Version 2.1-3 01:02:38.175 http://cunit.sourceforge.net/ 01:02:38.175 01:02:38.175 01:02:38.175 Suite: memory 01:02:38.175 Test: test ... 
01:02:38.175 register 0x200000200000 2097152 01:02:38.175 malloc 3145728 01:02:38.175 register 0x200000400000 4194304 01:02:38.175 buf 0x200000500000 len 3145728 PASSED 01:02:38.175 malloc 64 01:02:38.175 buf 0x2000004fff40 len 64 PASSED 01:02:38.175 malloc 4194304 01:02:38.175 register 0x200000800000 6291456 01:02:38.175 buf 0x200000a00000 len 4194304 PASSED 01:02:38.175 free 0x200000500000 3145728 01:02:38.175 free 0x2000004fff40 64 01:02:38.175 unregister 0x200000400000 4194304 PASSED 01:02:38.175 free 0x200000a00000 4194304 01:02:38.175 unregister 0x200000800000 6291456 PASSED 01:02:38.175 malloc 8388608 01:02:38.175 register 0x200000400000 10485760 01:02:38.175 buf 0x200000600000 len 8388608 PASSED 01:02:38.175 free 0x200000600000 8388608 01:02:38.175 unregister 0x200000400000 10485760 PASSED 01:02:38.175 passed 01:02:38.175 01:02:38.175 Run Summary: Type Total Ran Passed Failed Inactive 01:02:38.175 suites 1 1 n/a 0 0 01:02:38.175 tests 1 1 1 0 0 01:02:38.175 asserts 15 15 15 0 n/a 01:02:38.175 01:02:38.175 Elapsed time = 0.010 seconds 01:02:38.175 ************************************ 01:02:38.175 END TEST env_mem_callbacks 01:02:38.175 ************************************ 01:02:38.175 01:02:38.175 real 0m0.158s 01:02:38.175 user 0m0.024s 01:02:38.175 sys 0m0.031s 01:02:38.175 10:59:43 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:38.175 10:59:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 01:02:38.175 10:59:43 env -- common/autotest_common.sh@1142 -- # return 0 01:02:38.175 ************************************ 01:02:38.175 END TEST env 01:02:38.175 ************************************ 01:02:38.175 01:02:38.175 real 0m2.213s 01:02:38.175 user 0m1.068s 01:02:38.175 sys 0m0.814s 01:02:38.175 10:59:43 env -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:38.175 10:59:43 env -- common/autotest_common.sh@10 -- # set +x 01:02:38.432 10:59:43 -- common/autotest_common.sh@1142 -- # return 0 01:02:38.432 10:59:43 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 01:02:38.432 10:59:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:38.432 10:59:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:38.432 10:59:43 -- common/autotest_common.sh@10 -- # set +x 01:02:38.432 ************************************ 01:02:38.432 START TEST rpc 01:02:38.432 ************************************ 01:02:38.432 10:59:43 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 01:02:38.432 * Looking for test storage... 01:02:38.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 01:02:38.432 10:59:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=71274 01:02:38.432 10:59:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:02:38.432 10:59:43 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 01:02:38.432 10:59:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 71274 01:02:38.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:38.433 10:59:43 rpc -- common/autotest_common.sh@829 -- # '[' -z 71274 ']' 01:02:38.433 10:59:43 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:38.433 10:59:43 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:02:38.433 10:59:43 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
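The rpc suite starts by launching spdk_tgt with the bdev tracepoint group enabled and then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A minimal stand-alone version of that start-then-wait pattern might look like the sketch below; the ./spdk checkout path is an assumption, and spdk_get_version is simply a cheap RPC that this log also calls later.

```bash
#!/usr/bin/env bash
# Minimal sketch of the start-then-wait pattern used by rpc.sh above.
# Assumes an SPDK checkout in ./spdk and the default /var/tmp/spdk.sock socket.
./spdk/build/bin/spdk_tgt -e bdev &
tgt_pid=$!

for _ in $(seq 1 100); do
    # Once the RPC socket is up, a cheap method such as spdk_get_version
    # (also used later in this log) starts succeeding.
    if ./spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "spdk_tgt (pid ${tgt_pid}) is listening"
        break
    fi
    sleep 0.1
done
```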
01:02:38.433 10:59:43 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:02:38.433 10:59:43 rpc -- common/autotest_common.sh@10 -- # set +x 01:02:38.433 [2024-07-22 10:59:43.626337] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:38.433 [2024-07-22 10:59:43.626412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71274 ] 01:02:38.691 [2024-07-22 10:59:43.766547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:38.691 [2024-07-22 10:59:43.815500] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 01:02:38.691 [2024-07-22 10:59:43.815559] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 71274' to capture a snapshot of events at runtime. 01:02:38.691 [2024-07-22 10:59:43.815568] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:38.691 [2024-07-22 10:59:43.815577] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:38.691 [2024-07-22 10:59:43.815583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid71274 for offline analysis/debug. 01:02:38.691 [2024-07-22 10:59:43.815617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:38.691 [2024-07-22 10:59:43.857825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:02:39.625 10:59:44 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:02:39.625 10:59:44 rpc -- common/autotest_common.sh@862 -- # return 0 01:02:39.625 10:59:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 01:02:39.625 10:59:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 01:02:39.625 10:59:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 01:02:39.625 10:59:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 01:02:39.625 10:59:44 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:39.625 10:59:44 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:39.625 10:59:44 rpc -- common/autotest_common.sh@10 -- # set +x 01:02:39.625 ************************************ 01:02:39.625 START TEST rpc_integrity 01:02:39.625 ************************************ 01:02:39.625 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 01:02:39.625 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 01:02:39.625 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:39.625 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:39.625 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:39.625 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 01:02:39.625 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 01:02:39.625 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 01:02:39.625 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 01:02:39.625 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:39.625 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:39.625 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:39.625 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 01:02:39.625 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 01:02:39.625 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:39.625 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:39.625 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:39.625 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 01:02:39.625 { 01:02:39.625 "name": "Malloc0", 01:02:39.625 "aliases": [ 01:02:39.625 "9accf2b7-4cc6-420c-99cc-96769f1482bf" 01:02:39.625 ], 01:02:39.625 "product_name": "Malloc disk", 01:02:39.625 "block_size": 512, 01:02:39.625 "num_blocks": 16384, 01:02:39.625 "uuid": "9accf2b7-4cc6-420c-99cc-96769f1482bf", 01:02:39.625 "assigned_rate_limits": { 01:02:39.625 "rw_ios_per_sec": 0, 01:02:39.625 "rw_mbytes_per_sec": 0, 01:02:39.625 "r_mbytes_per_sec": 0, 01:02:39.625 "w_mbytes_per_sec": 0 01:02:39.625 }, 01:02:39.625 "claimed": false, 01:02:39.625 "zoned": false, 01:02:39.625 "supported_io_types": { 01:02:39.625 "read": true, 01:02:39.625 "write": true, 01:02:39.625 "unmap": true, 01:02:39.625 "flush": true, 01:02:39.625 "reset": true, 01:02:39.625 "nvme_admin": false, 01:02:39.625 "nvme_io": false, 01:02:39.625 "nvme_io_md": false, 01:02:39.625 "write_zeroes": true, 01:02:39.625 "zcopy": true, 01:02:39.625 "get_zone_info": false, 01:02:39.625 "zone_management": false, 01:02:39.625 "zone_append": false, 01:02:39.625 "compare": false, 01:02:39.625 "compare_and_write": false, 01:02:39.625 "abort": true, 01:02:39.625 "seek_hole": false, 01:02:39.625 "seek_data": false, 01:02:39.625 "copy": true, 01:02:39.625 "nvme_iov_md": false 01:02:39.625 }, 01:02:39.625 "memory_domains": [ 01:02:39.625 { 01:02:39.625 "dma_device_id": "system", 01:02:39.625 "dma_device_type": 1 01:02:39.625 }, 01:02:39.625 { 01:02:39.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:02:39.626 "dma_device_type": 2 01:02:39.626 } 01:02:39.626 ], 01:02:39.626 "driver_specific": {} 01:02:39.626 } 01:02:39.626 ]' 01:02:39.626 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 01:02:39.626 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 01:02:39.626 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:39.626 [2024-07-22 10:59:44.680492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 01:02:39.626 [2024-07-22 10:59:44.680551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 01:02:39.626 [2024-07-22 10:59:44.680568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fa2a10 01:02:39.626 [2024-07-22 10:59:44.680578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 01:02:39.626 [2024-07-22 10:59:44.682039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:02:39.626 [2024-07-22 10:59:44.682068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 01:02:39.626 Passthru0 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:39.626 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:39.626 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 01:02:39.626 { 01:02:39.626 "name": "Malloc0", 01:02:39.626 "aliases": [ 01:02:39.626 "9accf2b7-4cc6-420c-99cc-96769f1482bf" 01:02:39.626 ], 01:02:39.626 "product_name": "Malloc disk", 01:02:39.626 "block_size": 512, 01:02:39.626 "num_blocks": 16384, 01:02:39.626 "uuid": "9accf2b7-4cc6-420c-99cc-96769f1482bf", 01:02:39.626 "assigned_rate_limits": { 01:02:39.626 "rw_ios_per_sec": 0, 01:02:39.626 "rw_mbytes_per_sec": 0, 01:02:39.626 "r_mbytes_per_sec": 0, 01:02:39.626 "w_mbytes_per_sec": 0 01:02:39.626 }, 01:02:39.626 "claimed": true, 01:02:39.626 "claim_type": "exclusive_write", 01:02:39.626 "zoned": false, 01:02:39.626 "supported_io_types": { 01:02:39.626 "read": true, 01:02:39.626 "write": true, 01:02:39.626 "unmap": true, 01:02:39.626 "flush": true, 01:02:39.626 "reset": true, 01:02:39.626 "nvme_admin": false, 01:02:39.626 "nvme_io": false, 01:02:39.626 "nvme_io_md": false, 01:02:39.626 "write_zeroes": true, 01:02:39.626 "zcopy": true, 01:02:39.626 "get_zone_info": false, 01:02:39.626 "zone_management": false, 01:02:39.626 "zone_append": false, 01:02:39.626 "compare": false, 01:02:39.626 "compare_and_write": false, 01:02:39.626 "abort": true, 01:02:39.626 "seek_hole": false, 01:02:39.626 "seek_data": false, 01:02:39.626 "copy": true, 01:02:39.626 "nvme_iov_md": false 01:02:39.626 }, 01:02:39.626 "memory_domains": [ 01:02:39.626 { 01:02:39.626 "dma_device_id": "system", 01:02:39.626 "dma_device_type": 1 01:02:39.626 }, 01:02:39.626 { 01:02:39.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:02:39.626 "dma_device_type": 2 01:02:39.626 } 01:02:39.626 ], 01:02:39.626 "driver_specific": {} 01:02:39.626 }, 01:02:39.626 { 01:02:39.626 "name": "Passthru0", 01:02:39.626 "aliases": [ 01:02:39.626 "81f05b9b-4884-562a-bda3-cc153bb1a10e" 01:02:39.626 ], 01:02:39.626 "product_name": "passthru", 01:02:39.626 "block_size": 512, 01:02:39.626 "num_blocks": 16384, 01:02:39.626 "uuid": "81f05b9b-4884-562a-bda3-cc153bb1a10e", 01:02:39.626 "assigned_rate_limits": { 01:02:39.626 "rw_ios_per_sec": 0, 01:02:39.626 "rw_mbytes_per_sec": 0, 01:02:39.626 "r_mbytes_per_sec": 0, 01:02:39.626 "w_mbytes_per_sec": 0 01:02:39.626 }, 01:02:39.626 "claimed": false, 01:02:39.626 "zoned": false, 01:02:39.626 "supported_io_types": { 01:02:39.626 "read": true, 01:02:39.626 "write": true, 01:02:39.626 "unmap": true, 01:02:39.626 "flush": true, 01:02:39.626 "reset": true, 01:02:39.626 "nvme_admin": false, 01:02:39.626 "nvme_io": false, 01:02:39.626 "nvme_io_md": false, 01:02:39.626 "write_zeroes": true, 01:02:39.626 "zcopy": true, 01:02:39.626 "get_zone_info": false, 01:02:39.626 "zone_management": false, 01:02:39.626 "zone_append": false, 01:02:39.626 "compare": false, 01:02:39.626 "compare_and_write": false, 01:02:39.626 "abort": true, 01:02:39.626 "seek_hole": false, 01:02:39.626 "seek_data": false, 01:02:39.626 "copy": true, 01:02:39.626 "nvme_iov_md": false 01:02:39.626 }, 01:02:39.626 "memory_domains": [ 01:02:39.626 { 01:02:39.626 "dma_device_id": "system", 01:02:39.626 
"dma_device_type": 1 01:02:39.626 }, 01:02:39.626 { 01:02:39.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:02:39.626 "dma_device_type": 2 01:02:39.626 } 01:02:39.626 ], 01:02:39.626 "driver_specific": { 01:02:39.626 "passthru": { 01:02:39.626 "name": "Passthru0", 01:02:39.626 "base_bdev_name": "Malloc0" 01:02:39.626 } 01:02:39.626 } 01:02:39.626 } 01:02:39.626 ]' 01:02:39.626 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 01:02:39.626 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 01:02:39.626 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:39.626 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:39.626 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:39.626 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:39.626 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 01:02:39.626 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 01:02:39.884 ************************************ 01:02:39.884 END TEST rpc_integrity 01:02:39.884 ************************************ 01:02:39.884 10:59:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 01:02:39.884 01:02:39.884 real 0m0.332s 01:02:39.884 user 0m0.204s 01:02:39.884 sys 0m0.051s 01:02:39.884 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:39.884 10:59:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:39.884 10:59:44 rpc -- common/autotest_common.sh@1142 -- # return 0 01:02:39.884 10:59:44 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 01:02:39.884 10:59:44 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:39.884 10:59:44 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:39.884 10:59:44 rpc -- common/autotest_common.sh@10 -- # set +x 01:02:39.884 ************************************ 01:02:39.884 START TEST rpc_plugins 01:02:39.884 ************************************ 01:02:39.884 10:59:44 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 01:02:39.884 10:59:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 01:02:39.884 10:59:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:39.884 10:59:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:02:39.884 10:59:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:39.884 10:59:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 01:02:39.884 10:59:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 01:02:39.884 10:59:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:39.884 10:59:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:02:39.884 
10:59:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:39.884 10:59:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 01:02:39.884 { 01:02:39.884 "name": "Malloc1", 01:02:39.884 "aliases": [ 01:02:39.884 "5f61b027-a553-487b-842d-fcba75b84c24" 01:02:39.884 ], 01:02:39.884 "product_name": "Malloc disk", 01:02:39.884 "block_size": 4096, 01:02:39.884 "num_blocks": 256, 01:02:39.884 "uuid": "5f61b027-a553-487b-842d-fcba75b84c24", 01:02:39.884 "assigned_rate_limits": { 01:02:39.884 "rw_ios_per_sec": 0, 01:02:39.884 "rw_mbytes_per_sec": 0, 01:02:39.884 "r_mbytes_per_sec": 0, 01:02:39.884 "w_mbytes_per_sec": 0 01:02:39.884 }, 01:02:39.884 "claimed": false, 01:02:39.884 "zoned": false, 01:02:39.884 "supported_io_types": { 01:02:39.884 "read": true, 01:02:39.884 "write": true, 01:02:39.884 "unmap": true, 01:02:39.884 "flush": true, 01:02:39.884 "reset": true, 01:02:39.884 "nvme_admin": false, 01:02:39.884 "nvme_io": false, 01:02:39.884 "nvme_io_md": false, 01:02:39.884 "write_zeroes": true, 01:02:39.884 "zcopy": true, 01:02:39.884 "get_zone_info": false, 01:02:39.884 "zone_management": false, 01:02:39.884 "zone_append": false, 01:02:39.884 "compare": false, 01:02:39.884 "compare_and_write": false, 01:02:39.884 "abort": true, 01:02:39.884 "seek_hole": false, 01:02:39.884 "seek_data": false, 01:02:39.884 "copy": true, 01:02:39.884 "nvme_iov_md": false 01:02:39.884 }, 01:02:39.884 "memory_domains": [ 01:02:39.884 { 01:02:39.884 "dma_device_id": "system", 01:02:39.884 "dma_device_type": 1 01:02:39.884 }, 01:02:39.884 { 01:02:39.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:02:39.884 "dma_device_type": 2 01:02:39.884 } 01:02:39.884 ], 01:02:39.884 "driver_specific": {} 01:02:39.884 } 01:02:39.884 ]' 01:02:39.884 10:59:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 01:02:39.884 10:59:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 01:02:39.884 10:59:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 01:02:39.884 10:59:45 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:39.884 10:59:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:02:39.884 10:59:45 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:39.884 10:59:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 01:02:39.884 10:59:45 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:39.884 10:59:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:02:39.884 10:59:45 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:39.884 10:59:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 01:02:39.884 10:59:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 01:02:40.143 ************************************ 01:02:40.143 END TEST rpc_plugins 01:02:40.143 ************************************ 01:02:40.143 10:59:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 01:02:40.143 01:02:40.143 real 0m0.169s 01:02:40.143 user 0m0.096s 01:02:40.143 sys 0m0.030s 01:02:40.143 10:59:45 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:40.143 10:59:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:02:40.143 10:59:45 rpc -- common/autotest_common.sh@1142 -- # return 0 01:02:40.143 10:59:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 01:02:40.143 10:59:45 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:40.143 10:59:45 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
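rpc_plugins, which ends here, exercises rpc.py's plugin hook: the harness placed spdk/test/rpc_plugins on PYTHONPATH earlier in this log, so '--plugin rpc_plugin' contributes the create_malloc and delete_malloc commands seen above. A hedged stand-alone equivalent, with the checkout path as an assumption:

```bash
# Drive rpc.py through its --plugin hook the way rpc_plugins does above.
# Paths assume an SPDK checkout in ./spdk; the plugin lives in test/rpc_plugins.
export PYTHONPATH="./spdk/python:./spdk/test/rpc_plugins:${PYTHONPATH:-}"

./spdk/scripts/rpc.py --plugin rpc_plugin create_malloc    # creates Malloc1
./spdk/scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'    # should list Malloc1
./spdk/scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1
```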
01:02:40.143 10:59:45 rpc -- common/autotest_common.sh@10 -- # set +x 01:02:40.143 ************************************ 01:02:40.143 START TEST rpc_trace_cmd_test 01:02:40.143 ************************************ 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 01:02:40.143 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid71274", 01:02:40.143 "tpoint_group_mask": "0x8", 01:02:40.143 "iscsi_conn": { 01:02:40.143 "mask": "0x2", 01:02:40.143 "tpoint_mask": "0x0" 01:02:40.143 }, 01:02:40.143 "scsi": { 01:02:40.143 "mask": "0x4", 01:02:40.143 "tpoint_mask": "0x0" 01:02:40.143 }, 01:02:40.143 "bdev": { 01:02:40.143 "mask": "0x8", 01:02:40.143 "tpoint_mask": "0xffffffffffffffff" 01:02:40.143 }, 01:02:40.143 "nvmf_rdma": { 01:02:40.143 "mask": "0x10", 01:02:40.143 "tpoint_mask": "0x0" 01:02:40.143 }, 01:02:40.143 "nvmf_tcp": { 01:02:40.143 "mask": "0x20", 01:02:40.143 "tpoint_mask": "0x0" 01:02:40.143 }, 01:02:40.143 "ftl": { 01:02:40.143 "mask": "0x40", 01:02:40.143 "tpoint_mask": "0x0" 01:02:40.143 }, 01:02:40.143 "blobfs": { 01:02:40.143 "mask": "0x80", 01:02:40.143 "tpoint_mask": "0x0" 01:02:40.143 }, 01:02:40.143 "dsa": { 01:02:40.143 "mask": "0x200", 01:02:40.143 "tpoint_mask": "0x0" 01:02:40.143 }, 01:02:40.143 "thread": { 01:02:40.143 "mask": "0x400", 01:02:40.143 "tpoint_mask": "0x0" 01:02:40.143 }, 01:02:40.143 "nvme_pcie": { 01:02:40.143 "mask": "0x800", 01:02:40.143 "tpoint_mask": "0x0" 01:02:40.143 }, 01:02:40.143 "iaa": { 01:02:40.143 "mask": "0x1000", 01:02:40.143 "tpoint_mask": "0x0" 01:02:40.143 }, 01:02:40.143 "nvme_tcp": { 01:02:40.143 "mask": "0x2000", 01:02:40.143 "tpoint_mask": "0x0" 01:02:40.143 }, 01:02:40.143 "bdev_nvme": { 01:02:40.143 "mask": "0x4000", 01:02:40.143 "tpoint_mask": "0x0" 01:02:40.143 }, 01:02:40.143 "sock": { 01:02:40.143 "mask": "0x8000", 01:02:40.143 "tpoint_mask": "0x0" 01:02:40.143 } 01:02:40.143 }' 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 01:02:40.143 10:59:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 01:02:40.402 ************************************ 01:02:40.402 END TEST rpc_trace_cmd_test 01:02:40.402 ************************************ 01:02:40.402 10:59:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 01:02:40.402 01:02:40.402 real 0m0.234s 01:02:40.402 user 0m0.183s 
01:02:40.402 sys 0m0.040s 01:02:40.402 10:59:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:40.402 10:59:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 01:02:40.402 10:59:45 rpc -- common/autotest_common.sh@1142 -- # return 0 01:02:40.402 10:59:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 01:02:40.402 10:59:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 01:02:40.402 10:59:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 01:02:40.402 10:59:45 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:40.402 10:59:45 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:40.402 10:59:45 rpc -- common/autotest_common.sh@10 -- # set +x 01:02:40.402 ************************************ 01:02:40.402 START TEST rpc_daemon_integrity 01:02:40.402 ************************************ 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 01:02:40.402 { 01:02:40.402 "name": "Malloc2", 01:02:40.402 "aliases": [ 01:02:40.402 "82ddd407-364e-4779-a33b-f95b3a6678c4" 01:02:40.402 ], 01:02:40.402 "product_name": "Malloc disk", 01:02:40.402 "block_size": 512, 01:02:40.402 "num_blocks": 16384, 01:02:40.402 "uuid": "82ddd407-364e-4779-a33b-f95b3a6678c4", 01:02:40.402 "assigned_rate_limits": { 01:02:40.402 "rw_ios_per_sec": 0, 01:02:40.402 "rw_mbytes_per_sec": 0, 01:02:40.402 "r_mbytes_per_sec": 0, 01:02:40.402 "w_mbytes_per_sec": 0 01:02:40.402 }, 01:02:40.402 "claimed": false, 01:02:40.402 "zoned": false, 01:02:40.402 "supported_io_types": { 01:02:40.402 "read": true, 01:02:40.402 "write": true, 01:02:40.402 "unmap": true, 01:02:40.402 "flush": true, 01:02:40.402 "reset": true, 01:02:40.402 "nvme_admin": false, 01:02:40.402 "nvme_io": false, 01:02:40.402 "nvme_io_md": false, 01:02:40.402 "write_zeroes": true, 01:02:40.402 "zcopy": true, 01:02:40.402 "get_zone_info": false, 01:02:40.402 "zone_management": false, 01:02:40.402 "zone_append": false, 
01:02:40.402 "compare": false, 01:02:40.402 "compare_and_write": false, 01:02:40.402 "abort": true, 01:02:40.402 "seek_hole": false, 01:02:40.402 "seek_data": false, 01:02:40.402 "copy": true, 01:02:40.402 "nvme_iov_md": false 01:02:40.402 }, 01:02:40.402 "memory_domains": [ 01:02:40.402 { 01:02:40.402 "dma_device_id": "system", 01:02:40.402 "dma_device_type": 1 01:02:40.402 }, 01:02:40.402 { 01:02:40.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:02:40.402 "dma_device_type": 2 01:02:40.402 } 01:02:40.402 ], 01:02:40.402 "driver_specific": {} 01:02:40.402 } 01:02:40.402 ]' 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:40.402 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:40.661 [2024-07-22 10:59:45.611251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 01:02:40.661 [2024-07-22 10:59:45.611296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 01:02:40.661 [2024-07-22 10:59:45.611314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f9b800 01:02:40.661 [2024-07-22 10:59:45.611322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 01:02:40.661 [2024-07-22 10:59:45.612567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:02:40.661 [2024-07-22 10:59:45.612595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 01:02:40.661 Passthru0 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 01:02:40.661 { 01:02:40.661 "name": "Malloc2", 01:02:40.661 "aliases": [ 01:02:40.661 "82ddd407-364e-4779-a33b-f95b3a6678c4" 01:02:40.661 ], 01:02:40.661 "product_name": "Malloc disk", 01:02:40.661 "block_size": 512, 01:02:40.661 "num_blocks": 16384, 01:02:40.661 "uuid": "82ddd407-364e-4779-a33b-f95b3a6678c4", 01:02:40.661 "assigned_rate_limits": { 01:02:40.661 "rw_ios_per_sec": 0, 01:02:40.661 "rw_mbytes_per_sec": 0, 01:02:40.661 "r_mbytes_per_sec": 0, 01:02:40.661 "w_mbytes_per_sec": 0 01:02:40.661 }, 01:02:40.661 "claimed": true, 01:02:40.661 "claim_type": "exclusive_write", 01:02:40.661 "zoned": false, 01:02:40.661 "supported_io_types": { 01:02:40.661 "read": true, 01:02:40.661 "write": true, 01:02:40.661 "unmap": true, 01:02:40.661 "flush": true, 01:02:40.661 "reset": true, 01:02:40.661 "nvme_admin": false, 01:02:40.661 "nvme_io": false, 01:02:40.661 "nvme_io_md": false, 01:02:40.661 "write_zeroes": true, 01:02:40.661 "zcopy": true, 01:02:40.661 "get_zone_info": false, 01:02:40.661 "zone_management": false, 01:02:40.661 "zone_append": false, 01:02:40.661 "compare": false, 01:02:40.661 "compare_and_write": false, 01:02:40.661 "abort": true, 01:02:40.661 "seek_hole": 
false, 01:02:40.661 "seek_data": false, 01:02:40.661 "copy": true, 01:02:40.661 "nvme_iov_md": false 01:02:40.661 }, 01:02:40.661 "memory_domains": [ 01:02:40.661 { 01:02:40.661 "dma_device_id": "system", 01:02:40.661 "dma_device_type": 1 01:02:40.661 }, 01:02:40.661 { 01:02:40.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:02:40.661 "dma_device_type": 2 01:02:40.661 } 01:02:40.661 ], 01:02:40.661 "driver_specific": {} 01:02:40.661 }, 01:02:40.661 { 01:02:40.661 "name": "Passthru0", 01:02:40.661 "aliases": [ 01:02:40.661 "50627f67-5e6b-5927-a322-a05ea8b6d077" 01:02:40.661 ], 01:02:40.661 "product_name": "passthru", 01:02:40.661 "block_size": 512, 01:02:40.661 "num_blocks": 16384, 01:02:40.661 "uuid": "50627f67-5e6b-5927-a322-a05ea8b6d077", 01:02:40.661 "assigned_rate_limits": { 01:02:40.661 "rw_ios_per_sec": 0, 01:02:40.661 "rw_mbytes_per_sec": 0, 01:02:40.661 "r_mbytes_per_sec": 0, 01:02:40.661 "w_mbytes_per_sec": 0 01:02:40.661 }, 01:02:40.661 "claimed": false, 01:02:40.661 "zoned": false, 01:02:40.661 "supported_io_types": { 01:02:40.661 "read": true, 01:02:40.661 "write": true, 01:02:40.661 "unmap": true, 01:02:40.661 "flush": true, 01:02:40.661 "reset": true, 01:02:40.661 "nvme_admin": false, 01:02:40.661 "nvme_io": false, 01:02:40.661 "nvme_io_md": false, 01:02:40.661 "write_zeroes": true, 01:02:40.661 "zcopy": true, 01:02:40.661 "get_zone_info": false, 01:02:40.661 "zone_management": false, 01:02:40.661 "zone_append": false, 01:02:40.661 "compare": false, 01:02:40.661 "compare_and_write": false, 01:02:40.661 "abort": true, 01:02:40.661 "seek_hole": false, 01:02:40.661 "seek_data": false, 01:02:40.661 "copy": true, 01:02:40.661 "nvme_iov_md": false 01:02:40.661 }, 01:02:40.661 "memory_domains": [ 01:02:40.661 { 01:02:40.661 "dma_device_id": "system", 01:02:40.661 "dma_device_type": 1 01:02:40.661 }, 01:02:40.661 { 01:02:40.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:02:40.661 "dma_device_type": 2 01:02:40.661 } 01:02:40.661 ], 01:02:40.661 "driver_specific": { 01:02:40.661 "passthru": { 01:02:40.661 "name": "Passthru0", 01:02:40.661 "base_bdev_name": "Malloc2" 01:02:40.661 } 01:02:40.661 } 01:02:40.661 } 01:02:40.661 ]' 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 01:02:40.661 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 01:02:40.661 ************************************ 01:02:40.661 END TEST rpc_daemon_integrity 01:02:40.661 ************************************ 01:02:40.662 10:59:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 01:02:40.662 01:02:40.662 real 0m0.320s 01:02:40.662 user 0m0.179s 01:02:40.662 sys 0m0.070s 01:02:40.662 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:40.662 10:59:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:02:40.662 10:59:45 rpc -- common/autotest_common.sh@1142 -- # return 0 01:02:40.662 10:59:45 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:02:40.662 10:59:45 rpc -- rpc/rpc.sh@84 -- # killprocess 71274 01:02:40.662 10:59:45 rpc -- common/autotest_common.sh@948 -- # '[' -z 71274 ']' 01:02:40.662 10:59:45 rpc -- common/autotest_common.sh@952 -- # kill -0 71274 01:02:40.662 10:59:45 rpc -- common/autotest_common.sh@953 -- # uname 01:02:40.662 10:59:45 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:02:40.662 10:59:45 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71274 01:02:40.920 killing process with pid 71274 01:02:40.920 10:59:45 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:02:40.920 10:59:45 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:02:40.920 10:59:45 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71274' 01:02:40.920 10:59:45 rpc -- common/autotest_common.sh@967 -- # kill 71274 01:02:40.920 10:59:45 rpc -- common/autotest_common.sh@972 -- # wait 71274 01:02:41.203 01:02:41.203 real 0m2.744s 01:02:41.203 user 0m3.453s 01:02:41.203 sys 0m0.765s 01:02:41.203 ************************************ 01:02:41.203 END TEST rpc 01:02:41.203 ************************************ 01:02:41.203 10:59:46 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:41.203 10:59:46 rpc -- common/autotest_common.sh@10 -- # set +x 01:02:41.203 10:59:46 -- common/autotest_common.sh@1142 -- # return 0 01:02:41.203 10:59:46 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 01:02:41.203 10:59:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:41.203 10:59:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:41.203 10:59:46 -- common/autotest_common.sh@10 -- # set +x 01:02:41.203 ************************************ 01:02:41.203 START TEST skip_rpc 01:02:41.203 ************************************ 01:02:41.203 10:59:46 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 01:02:41.203 * Looking for test storage... 
01:02:41.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 01:02:41.203 10:59:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:02:41.203 10:59:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:02:41.203 10:59:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 01:02:41.203 10:59:46 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:41.203 10:59:46 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:41.203 10:59:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:02:41.467 ************************************ 01:02:41.467 START TEST skip_rpc 01:02:41.467 ************************************ 01:02:41.467 10:59:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 01:02:41.467 10:59:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=71466 01:02:41.467 10:59:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 01:02:41.467 10:59:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:02:41.467 10:59:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 01:02:41.467 [2024-07-22 10:59:46.456328] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:41.467 [2024-07-22 10:59:46.456604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71466 ] 01:02:41.467 [2024-07-22 10:59:46.599411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:41.467 [2024-07-22 10:59:46.648526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:41.726 [2024-07-22 10:59:46.690512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71466 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 71466 ']' 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 71466 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71466 01:02:46.988 killing process with pid 71466 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71466' 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 71466 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 71466 01:02:46.988 01:02:46.988 real 0m5.370s 01:02:46.988 user 0m5.049s 01:02:46.988 sys 0m0.243s 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:46.988 10:59:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:02:46.988 ************************************ 01:02:46.988 END TEST skip_rpc 01:02:46.988 ************************************ 01:02:46.988 10:59:51 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 01:02:46.988 10:59:51 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 01:02:46.988 10:59:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:46.988 10:59:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:46.988 10:59:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:02:46.988 ************************************ 01:02:46.988 START TEST skip_rpc_with_json 01:02:46.988 ************************************ 01:02:46.988 10:59:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 01:02:46.988 10:59:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 01:02:46.988 10:59:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71553 01:02:46.988 10:59:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:02:46.988 10:59:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:02:46.988 10:59:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71553 01:02:46.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:46.988 10:59:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 71553 ']' 01:02:46.988 10:59:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:46.988 10:59:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 01:02:46.988 10:59:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
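skip_rpc_with_json, which starts here, is about round-tripping runtime configuration: configure an NVMe-oF TCP transport over RPC, dump everything with save_config, then restart the target from that JSON with no RPC server and confirm the transport comes back (the later grep for 'TCP Transport Init' in log.txt). A rough stand-alone version of that round trip, with file paths as placeholders:

```bash
# Sketch of the configuration round trip skip_rpc_with_json performs below.
# File names are placeholders; the RPC socket is the default /var/tmp/spdk.sock.
rpc=./spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp            # configure a TCP transport at runtime
$rpc save_config > /tmp/config.json          # snapshot the live JSON configuration

# Later, a fresh target started straight from that snapshot should bring the
# transport back with no RPC server at all; that is what the final grep checks.
./spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json \
    > /tmp/log.txt 2>&1 &
sleep 5
grep -q 'TCP Transport Init' /tmp/log.txt && echo 'transport restored from JSON'
```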
01:02:46.988 10:59:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 01:02:46.988 10:59:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:02:46.988 [2024-07-22 10:59:51.896926] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:46.988 [2024-07-22 10:59:51.897004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71553 ] 01:02:46.988 [2024-07-22 10:59:52.040679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:46.988 [2024-07-22 10:59:52.089768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:46.988 [2024-07-22 10:59:52.131930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:02:47.555 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:02:47.555 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 01:02:47.555 10:59:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 01:02:47.555 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:47.555 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:02:47.555 [2024-07-22 10:59:52.746673] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 01:02:47.555 request: 01:02:47.555 { 01:02:47.555 "trtype": "tcp", 01:02:47.555 "method": "nvmf_get_transports", 01:02:47.555 "req_id": 1 01:02:47.555 } 01:02:47.555 Got JSON-RPC error response 01:02:47.555 response: 01:02:47.555 { 01:02:47.555 "code": -19, 01:02:47.555 "message": "No such device" 01:02:47.555 } 01:02:47.555 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:02:47.555 10:59:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 01:02:47.555 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:47.555 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:02:47.814 [2024-07-22 10:59:52.762771] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:47.814 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:47.814 10:59:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 01:02:47.814 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:47.814 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:02:47.814 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:47.814 10:59:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:02:47.814 { 01:02:47.814 "subsystems": [ 01:02:47.814 { 01:02:47.814 "subsystem": "keyring", 01:02:47.814 "config": [] 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "subsystem": "iobuf", 01:02:47.814 "config": [ 01:02:47.814 { 01:02:47.814 "method": "iobuf_set_options", 01:02:47.814 "params": { 01:02:47.814 "small_pool_count": 8192, 01:02:47.814 "large_pool_count": 1024, 01:02:47.814 "small_bufsize": 8192, 01:02:47.814 "large_bufsize": 135168 01:02:47.814 } 01:02:47.814 } 01:02:47.814 
] 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "subsystem": "sock", 01:02:47.814 "config": [ 01:02:47.814 { 01:02:47.814 "method": "sock_set_default_impl", 01:02:47.814 "params": { 01:02:47.814 "impl_name": "uring" 01:02:47.814 } 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "method": "sock_impl_set_options", 01:02:47.814 "params": { 01:02:47.814 "impl_name": "ssl", 01:02:47.814 "recv_buf_size": 4096, 01:02:47.814 "send_buf_size": 4096, 01:02:47.814 "enable_recv_pipe": true, 01:02:47.814 "enable_quickack": false, 01:02:47.814 "enable_placement_id": 0, 01:02:47.814 "enable_zerocopy_send_server": true, 01:02:47.814 "enable_zerocopy_send_client": false, 01:02:47.814 "zerocopy_threshold": 0, 01:02:47.814 "tls_version": 0, 01:02:47.814 "enable_ktls": false 01:02:47.814 } 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "method": "sock_impl_set_options", 01:02:47.814 "params": { 01:02:47.814 "impl_name": "posix", 01:02:47.814 "recv_buf_size": 2097152, 01:02:47.814 "send_buf_size": 2097152, 01:02:47.814 "enable_recv_pipe": true, 01:02:47.814 "enable_quickack": false, 01:02:47.814 "enable_placement_id": 0, 01:02:47.814 "enable_zerocopy_send_server": true, 01:02:47.814 "enable_zerocopy_send_client": false, 01:02:47.814 "zerocopy_threshold": 0, 01:02:47.814 "tls_version": 0, 01:02:47.814 "enable_ktls": false 01:02:47.814 } 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "method": "sock_impl_set_options", 01:02:47.814 "params": { 01:02:47.814 "impl_name": "uring", 01:02:47.814 "recv_buf_size": 2097152, 01:02:47.814 "send_buf_size": 2097152, 01:02:47.814 "enable_recv_pipe": true, 01:02:47.814 "enable_quickack": false, 01:02:47.814 "enable_placement_id": 0, 01:02:47.814 "enable_zerocopy_send_server": false, 01:02:47.814 "enable_zerocopy_send_client": false, 01:02:47.814 "zerocopy_threshold": 0, 01:02:47.814 "tls_version": 0, 01:02:47.814 "enable_ktls": false 01:02:47.814 } 01:02:47.814 } 01:02:47.814 ] 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "subsystem": "vmd", 01:02:47.814 "config": [] 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "subsystem": "accel", 01:02:47.814 "config": [ 01:02:47.814 { 01:02:47.814 "method": "accel_set_options", 01:02:47.814 "params": { 01:02:47.814 "small_cache_size": 128, 01:02:47.814 "large_cache_size": 16, 01:02:47.814 "task_count": 2048, 01:02:47.814 "sequence_count": 2048, 01:02:47.814 "buf_count": 2048 01:02:47.814 } 01:02:47.814 } 01:02:47.814 ] 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "subsystem": "bdev", 01:02:47.814 "config": [ 01:02:47.814 { 01:02:47.814 "method": "bdev_set_options", 01:02:47.814 "params": { 01:02:47.814 "bdev_io_pool_size": 65535, 01:02:47.814 "bdev_io_cache_size": 256, 01:02:47.814 "bdev_auto_examine": true, 01:02:47.814 "iobuf_small_cache_size": 128, 01:02:47.814 "iobuf_large_cache_size": 16 01:02:47.814 } 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "method": "bdev_raid_set_options", 01:02:47.814 "params": { 01:02:47.814 "process_window_size_kb": 1024, 01:02:47.814 "process_max_bandwidth_mb_sec": 0 01:02:47.814 } 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "method": "bdev_iscsi_set_options", 01:02:47.814 "params": { 01:02:47.814 "timeout_sec": 30 01:02:47.814 } 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "method": "bdev_nvme_set_options", 01:02:47.814 "params": { 01:02:47.814 "action_on_timeout": "none", 01:02:47.814 "timeout_us": 0, 01:02:47.814 "timeout_admin_us": 0, 01:02:47.814 "keep_alive_timeout_ms": 10000, 01:02:47.814 "arbitration_burst": 0, 01:02:47.814 "low_priority_weight": 0, 01:02:47.814 "medium_priority_weight": 0, 01:02:47.814 
"high_priority_weight": 0, 01:02:47.814 "nvme_adminq_poll_period_us": 10000, 01:02:47.814 "nvme_ioq_poll_period_us": 0, 01:02:47.814 "io_queue_requests": 0, 01:02:47.814 "delay_cmd_submit": true, 01:02:47.814 "transport_retry_count": 4, 01:02:47.814 "bdev_retry_count": 3, 01:02:47.814 "transport_ack_timeout": 0, 01:02:47.814 "ctrlr_loss_timeout_sec": 0, 01:02:47.814 "reconnect_delay_sec": 0, 01:02:47.814 "fast_io_fail_timeout_sec": 0, 01:02:47.814 "disable_auto_failback": false, 01:02:47.814 "generate_uuids": false, 01:02:47.814 "transport_tos": 0, 01:02:47.814 "nvme_error_stat": false, 01:02:47.814 "rdma_srq_size": 0, 01:02:47.814 "io_path_stat": false, 01:02:47.814 "allow_accel_sequence": false, 01:02:47.814 "rdma_max_cq_size": 0, 01:02:47.814 "rdma_cm_event_timeout_ms": 0, 01:02:47.814 "dhchap_digests": [ 01:02:47.814 "sha256", 01:02:47.814 "sha384", 01:02:47.814 "sha512" 01:02:47.814 ], 01:02:47.814 "dhchap_dhgroups": [ 01:02:47.814 "null", 01:02:47.814 "ffdhe2048", 01:02:47.814 "ffdhe3072", 01:02:47.814 "ffdhe4096", 01:02:47.814 "ffdhe6144", 01:02:47.814 "ffdhe8192" 01:02:47.814 ] 01:02:47.814 } 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "method": "bdev_nvme_set_hotplug", 01:02:47.814 "params": { 01:02:47.814 "period_us": 100000, 01:02:47.814 "enable": false 01:02:47.814 } 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "method": "bdev_wait_for_examine" 01:02:47.814 } 01:02:47.814 ] 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "subsystem": "scsi", 01:02:47.814 "config": null 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "subsystem": "scheduler", 01:02:47.814 "config": [ 01:02:47.814 { 01:02:47.814 "method": "framework_set_scheduler", 01:02:47.814 "params": { 01:02:47.814 "name": "static" 01:02:47.814 } 01:02:47.814 } 01:02:47.814 ] 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "subsystem": "vhost_scsi", 01:02:47.814 "config": [] 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "subsystem": "vhost_blk", 01:02:47.814 "config": [] 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "subsystem": "ublk", 01:02:47.814 "config": [] 01:02:47.814 }, 01:02:47.814 { 01:02:47.814 "subsystem": "nbd", 01:02:47.814 "config": [] 01:02:47.814 }, 01:02:47.814 { 01:02:47.815 "subsystem": "nvmf", 01:02:47.815 "config": [ 01:02:47.815 { 01:02:47.815 "method": "nvmf_set_config", 01:02:47.815 "params": { 01:02:47.815 "discovery_filter": "match_any", 01:02:47.815 "admin_cmd_passthru": { 01:02:47.815 "identify_ctrlr": false 01:02:47.815 } 01:02:47.815 } 01:02:47.815 }, 01:02:47.815 { 01:02:47.815 "method": "nvmf_set_max_subsystems", 01:02:47.815 "params": { 01:02:47.815 "max_subsystems": 1024 01:02:47.815 } 01:02:47.815 }, 01:02:47.815 { 01:02:47.815 "method": "nvmf_set_crdt", 01:02:47.815 "params": { 01:02:47.815 "crdt1": 0, 01:02:47.815 "crdt2": 0, 01:02:47.815 "crdt3": 0 01:02:47.815 } 01:02:47.815 }, 01:02:47.815 { 01:02:47.815 "method": "nvmf_create_transport", 01:02:47.815 "params": { 01:02:47.815 "trtype": "TCP", 01:02:47.815 "max_queue_depth": 128, 01:02:47.815 "max_io_qpairs_per_ctrlr": 127, 01:02:47.815 "in_capsule_data_size": 4096, 01:02:47.815 "max_io_size": 131072, 01:02:47.815 "io_unit_size": 131072, 01:02:47.815 "max_aq_depth": 128, 01:02:47.815 "num_shared_buffers": 511, 01:02:47.815 "buf_cache_size": 4294967295, 01:02:47.815 "dif_insert_or_strip": false, 01:02:47.815 "zcopy": false, 01:02:47.815 "c2h_success": true, 01:02:47.815 "sock_priority": 0, 01:02:47.815 "abort_timeout_sec": 1, 01:02:47.815 "ack_timeout": 0, 01:02:47.815 "data_wr_pool_size": 0 01:02:47.815 } 01:02:47.815 } 01:02:47.815 ] 01:02:47.815 }, 
01:02:47.815 { 01:02:47.815 "subsystem": "iscsi", 01:02:47.815 "config": [ 01:02:47.815 { 01:02:47.815 "method": "iscsi_set_options", 01:02:47.815 "params": { 01:02:47.815 "node_base": "iqn.2016-06.io.spdk", 01:02:47.815 "max_sessions": 128, 01:02:47.815 "max_connections_per_session": 2, 01:02:47.815 "max_queue_depth": 64, 01:02:47.815 "default_time2wait": 2, 01:02:47.815 "default_time2retain": 20, 01:02:47.815 "first_burst_length": 8192, 01:02:47.815 "immediate_data": true, 01:02:47.815 "allow_duplicated_isid": false, 01:02:47.815 "error_recovery_level": 0, 01:02:47.815 "nop_timeout": 60, 01:02:47.815 "nop_in_interval": 30, 01:02:47.815 "disable_chap": false, 01:02:47.815 "require_chap": false, 01:02:47.815 "mutual_chap": false, 01:02:47.815 "chap_group": 0, 01:02:47.815 "max_large_datain_per_connection": 64, 01:02:47.815 "max_r2t_per_connection": 4, 01:02:47.815 "pdu_pool_size": 36864, 01:02:47.815 "immediate_data_pool_size": 16384, 01:02:47.815 "data_out_pool_size": 2048 01:02:47.815 } 01:02:47.815 } 01:02:47.815 ] 01:02:47.815 } 01:02:47.815 ] 01:02:47.815 } 01:02:47.815 10:59:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 01:02:47.815 10:59:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71553 01:02:47.815 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 71553 ']' 01:02:47.815 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 71553 01:02:47.815 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 01:02:47.815 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:02:47.815 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71553 01:02:47.815 killing process with pid 71553 01:02:47.815 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:02:47.815 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:02:47.815 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71553' 01:02:47.815 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 71553 01:02:47.815 10:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 71553 01:02:48.405 10:59:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71580 01:02:48.405 10:59:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:02:48.405 10:59:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71580 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 71580 ']' 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 71580 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71580 01:02:53.667 killing process with pid 71580 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # 
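The JSON blob ending above is the snapshot that skip_rpc_with_json captures from the running target's save_config RPC; the test then kills pid 71553 and relaunches the target non-interactively from that file (pid 71580). A minimal sketch of the same round trip, assuming the repository paths from this run; the explicit log-file redirection is an illustrative addition, since the test routes its output through its own helpers:

SPDK=/home/vagrant/spdk_repo/spdk
# Dump the live configuration of the running target to the file the test uses.
$SPDK/scripts/rpc.py save_config > $SPDK/test/rpc/config.json
# Relaunch without an RPC server, loading the saved JSON directly, and keep
# the console output so the transport init can be verified afterwards.
$SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
    --json $SPDK/test/rpc/config.json > $SPDK/test/rpc/log.txt 2>&1 &
sleep 5
# The nvmf TCP transport from the saved config should have come up.
grep -q 'TCP Transport Init' $SPDK/test/rpc/log.txt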
process_name=reactor_0 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71580' 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 71580 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 71580 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:02:53.667 01:02:53.667 real 0m6.840s 01:02:53.667 user 0m6.499s 01:02:53.667 sys 0m0.624s 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:53.667 ************************************ 01:02:53.667 END TEST skip_rpc_with_json 01:02:53.667 ************************************ 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:02:53.667 10:59:58 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 01:02:53.667 10:59:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 01:02:53.667 10:59:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:53.667 10:59:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:53.667 10:59:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:02:53.667 ************************************ 01:02:53.667 START TEST skip_rpc_with_delay 01:02:53.667 ************************************ 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 
0x1 --wait-for-rpc 01:02:53.667 [2024-07-22 10:59:58.809221] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 01:02:53.667 [2024-07-22 10:59:58.809340] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:02:53.667 ************************************ 01:02:53.667 END TEST skip_rpc_with_delay 01:02:53.667 ************************************ 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:02:53.667 01:02:53.667 real 0m0.082s 01:02:53.667 user 0m0.035s 01:02:53.667 sys 0m0.045s 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:53.667 10:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 01:02:53.926 10:59:58 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 01:02:53.926 10:59:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 01:02:53.926 10:59:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 01:02:53.926 10:59:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 01:02:53.926 10:59:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:53.926 10:59:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:53.926 10:59:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:02:53.926 ************************************ 01:02:53.926 START TEST exit_on_failed_rpc_init 01:02:53.926 ************************************ 01:02:53.926 10:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 01:02:53.926 10:59:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71684 01:02:53.926 10:59:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:02:53.926 10:59:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71684 01:02:53.926 10:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 71684 ']' 01:02:53.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:53.926 10:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:53.926 10:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 01:02:53.926 10:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:53.926 10:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 01:02:53.926 10:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 01:02:53.926 [2024-07-22 10:59:58.960294] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
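The "Cannot use '--wait-for-rpc' if no RPC server is going to be started" error above is the expected outcome of skip_rpc_with_delay: --wait-for-rpc pauses app startup until an RPC arrives, which cannot happen when --no-rpc-server suppresses the RPC service. A sketch of the assertion, assuming the binary path from this run:

# The test only passes if this invocation refuses to start.
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo 'expected spdk_tgt to reject --wait-for-rpc without an RPC server' >&2
    exit 1
fi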
01:02:53.926 [2024-07-22 10:59:58.960362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71684 ] 01:02:53.926 [2024-07-22 10:59:59.089101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:54.184 [2024-07-22 10:59:59.146828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:54.184 [2024-07-22 10:59:59.188759] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 01:02:54.750 10:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:02:54.750 [2024-07-22 10:59:59.880616] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:54.750 [2024-07-22 10:59:59.880707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71702 ] 01:02:55.007 [2024-07-22 11:00:00.027920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:55.007 [2024-07-22 11:00:00.080137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:02:55.007 [2024-07-22 11:00:00.080416] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
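The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is exactly what exit_on_failed_rpc_init provokes: the second spdk_tgt (core mask 0x2) is started without its own RPC listen address, collides with pid 71684 on the default socket, and shuts itself down. Outside of this negative test, two targets can coexist by giving each a distinct socket with -r, roughly as below (the socket names are illustrative, not taken from this run):

SPDK=/home/vagrant/spdk_repo/spdk
# First instance on core 0 with its own RPC socket.
$SPDK/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
# Second instance on core 1 must use a different RPC socket to avoid
# the "path ... in use" failure seen above.
$SPDK/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
# Each instance is then addressed explicitly:
$SPDK/scripts/rpc.py -s /var/tmp/spdk_a.sock rpc_get_methods
$SPDK/scripts/rpc.py -s /var/tmp/spdk_b.sock rpc_get_methods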
01:02:55.007 [2024-07-22 11:00:00.080558] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 01:02:55.007 [2024-07-22 11:00:00.080589] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71684 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 71684 ']' 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 71684 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71684 01:02:55.007 killing process with pid 71684 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71684' 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 71684 01:02:55.007 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 71684 01:02:55.575 01:02:55.575 real 0m1.614s 01:02:55.575 user 0m1.799s 01:02:55.575 sys 0m0.382s 01:02:55.575 ************************************ 01:02:55.575 END TEST exit_on_failed_rpc_init 01:02:55.575 ************************************ 01:02:55.575 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:55.575 11:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 01:02:55.575 11:00:00 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 01:02:55.575 11:00:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:02:55.576 01:02:55.576 real 0m14.326s 01:02:55.576 user 0m13.523s 01:02:55.576 sys 0m1.577s 01:02:55.576 ************************************ 01:02:55.576 END TEST skip_rpc 01:02:55.576 ************************************ 01:02:55.576 11:00:00 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:55.576 11:00:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:02:55.576 11:00:00 -- common/autotest_common.sh@1142 -- # return 0 01:02:55.576 11:00:00 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 01:02:55.576 11:00:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:55.576 
11:00:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:55.576 11:00:00 -- common/autotest_common.sh@10 -- # set +x 01:02:55.576 ************************************ 01:02:55.576 START TEST rpc_client 01:02:55.576 ************************************ 01:02:55.576 11:00:00 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 01:02:55.576 * Looking for test storage... 01:02:55.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 01:02:55.576 11:00:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 01:02:55.848 OK 01:02:55.848 11:00:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 01:02:55.848 01:02:55.848 real 0m0.160s 01:02:55.848 user 0m0.072s 01:02:55.848 sys 0m0.097s 01:02:55.848 11:00:00 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:55.848 11:00:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 01:02:55.848 ************************************ 01:02:55.848 END TEST rpc_client 01:02:55.848 ************************************ 01:02:55.848 11:00:00 -- common/autotest_common.sh@1142 -- # return 0 01:02:55.848 11:00:00 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 01:02:55.848 11:00:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:55.848 11:00:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:55.848 11:00:00 -- common/autotest_common.sh@10 -- # set +x 01:02:55.848 ************************************ 01:02:55.849 START TEST json_config 01:02:55.849 ************************************ 01:02:55.849 11:00:00 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 01:02:55.849 11:00:00 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:55.849 11:00:00 json_config -- nvmf/common.sh@7 -- # uname -s 01:02:55.849 11:00:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:55.849 11:00:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:55.849 11:00:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:55.849 11:00:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:55.849 11:00:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:55.849 11:00:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:55.849 11:00:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:55.849 11:00:00 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:55.849 11:00:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:55.849 11:00:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:55.849 11:00:01 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:55.849 11:00:01 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:55.849 11:00:01 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:55.849 11:00:01 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:55.849 11:00:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:55.849 11:00:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:55.849 11:00:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:55.849 11:00:01 json_config -- paths/export.sh@5 -- # export PATH 01:02:55.849 11:00:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@47 -- # : 0 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:02:55.849 11:00:01 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 01:02:55.849 INFO: JSON configuration test init 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 01:02:55.849 11:00:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:55.849 11:00:01 json_config -- common/autotest_common.sh@10 -- # set +x 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 01:02:55.849 11:00:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:55.849 11:00:01 json_config -- common/autotest_common.sh@10 -- # set +x 01:02:55.849 Waiting for target to run... 01:02:55.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 01:02:55.849 11:00:01 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 01:02:55.849 11:00:01 json_config -- json_config/common.sh@9 -- # local app=target 01:02:55.849 11:00:01 json_config -- json_config/common.sh@10 -- # shift 01:02:55.849 11:00:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:02:55.849 11:00:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 01:02:55.849 11:00:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 01:02:55.849 11:00:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:02:55.849 11:00:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:02:55.849 11:00:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71826 01:02:55.849 11:00:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
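json_config_test_start_app above launches the target on a private RPC socket with --wait-for-rpc and then blocks in waitforlisten until pid 71826 answers RPCs before any configuration is pushed. A reduced sketch of that handshake, assuming the paths from this run; the polling loop stands in for the waitforlisten helper in autotest_common.sh, and framework_start_init is the generic way to end the --wait-for-rpc pause (this test goes on to feed load_config at that point):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk_tgt.sock
$SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r $SOCK --wait-for-rpc &
pid=$!
# Poll until the RPC server answers on the private socket.
until $SPDK/scripts/rpc.py -s $SOCK -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# The app is now up but paused; pre-init RPCs can be issued here, and
# framework_start_init resumes subsystem initialization when done.
$SPDK/scripts/rpc.py -s $SOCK framework_start_init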
01:02:55.849 11:00:01 json_config -- json_config/common.sh@25 -- # waitforlisten 71826 /var/tmp/spdk_tgt.sock 01:02:55.849 11:00:01 json_config -- common/autotest_common.sh@829 -- # '[' -z 71826 ']' 01:02:55.849 11:00:01 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:02:55.849 11:00:01 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 01:02:55.849 11:00:01 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 01:02:55.849 11:00:01 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 01:02:55.849 11:00:01 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 01:02:55.849 11:00:01 json_config -- common/autotest_common.sh@10 -- # set +x 01:02:56.107 [2024-07-22 11:00:01.087632] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:56.107 [2024-07-22 11:00:01.088640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71826 ] 01:02:56.364 [2024-07-22 11:00:01.442620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:56.364 [2024-07-22 11:00:01.471993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:56.926 11:00:01 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:02:56.926 11:00:01 json_config -- common/autotest_common.sh@862 -- # return 0 01:02:56.926 11:00:01 json_config -- json_config/common.sh@26 -- # echo '' 01:02:56.926 01:02:56.926 11:00:01 json_config -- json_config/json_config.sh@273 -- # create_accel_config 01:02:56.926 11:00:01 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 01:02:56.926 11:00:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:56.926 11:00:01 json_config -- common/autotest_common.sh@10 -- # set +x 01:02:56.926 11:00:01 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 01:02:56.926 11:00:01 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 01:02:56.926 11:00:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:56.926 11:00:01 json_config -- common/autotest_common.sh@10 -- # set +x 01:02:56.926 11:00:01 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 01:02:56.926 11:00:02 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 01:02:56.926 11:00:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 01:02:57.183 [2024-07-22 11:00:02.238641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:02:57.441 11:00:02 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 01:02:57.441 11:00:02 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 01:02:57.441 11:00:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:57.441 11:00:02 json_config -- common/autotest_common.sh@10 -- # set +x 01:02:57.441 11:00:02 json_config -- json_config/json_config.sh@45 -- # local ret=0 01:02:57.441 11:00:02 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 01:02:57.441 11:00:02 json_config -- json_config/json_config.sh@46 -- # local enabled_types 01:02:57.441 11:00:02 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 01:02:57.441 11:00:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 01:02:57.441 11:00:02 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 01:02:57.441 11:00:02 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 01:02:57.441 11:00:02 json_config -- json_config/json_config.sh@48 -- # local get_types 01:02:57.441 11:00:02 json_config -- json_config/json_config.sh@50 -- # local type_diff 01:02:57.441 11:00:02 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 01:02:57.441 11:00:02 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 01:02:57.441 11:00:02 json_config -- json_config/json_config.sh@51 -- # sort 01:02:57.441 11:00:02 json_config -- json_config/json_config.sh@51 -- # uniq -u 01:02:57.700 11:00:02 json_config -- json_config/json_config.sh@51 -- # type_diff= 01:02:57.700 11:00:02 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 01:02:57.700 11:00:02 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 01:02:57.700 11:00:02 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:57.700 11:00:02 json_config -- common/autotest_common.sh@10 -- # set +x 01:02:57.700 11:00:02 json_config -- json_config/json_config.sh@59 -- # return 0 01:02:57.700 11:00:02 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 01:02:57.700 11:00:02 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 01:02:57.700 11:00:02 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 01:02:57.700 11:00:02 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 01:02:57.700 11:00:02 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 01:02:57.700 11:00:02 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 01:02:57.700 11:00:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:57.700 11:00:02 json_config -- common/autotest_common.sh@10 -- # set +x 01:02:57.700 11:00:02 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 01:02:57.700 11:00:02 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 01:02:57.700 11:00:02 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 01:02:57.700 11:00:02 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 01:02:57.700 11:00:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 01:02:57.956 MallocForNvmf0 01:02:57.956 11:00:02 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 01:02:57.956 11:00:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 01:02:57.956 MallocForNvmf1 01:02:57.956 11:00:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 01:02:57.956 
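tgt_check_notification_types above asks the target which notification types it emits (notify_get_types) and verifies the answer against the expected bdev_register/bdev_unregister pair by flattening both lists and keeping entries that appear only once. The jq/tr/sort/uniq chain in the trace reduces to this sketch against the same socket:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
enabled_types=(bdev_register bdev_unregister)
# jq -r '.[]' flattens the JSON array returned by the target.
get_types=($($RPC notify_get_types | jq -r '.[]'))
# Entries appearing only once exist in one list but not the other.
type_diff=$(echo "${enabled_types[@]}" "${get_types[@]}" | tr ' ' '\n' | sort | uniq -u)
[[ -z $type_diff ]] || exit 1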
11:00:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 01:02:58.214 [2024-07-22 11:00:03.315931] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:58.214 11:00:03 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:02:58.214 11:00:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:02:58.472 11:00:03 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 01:02:58.473 11:00:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 01:02:58.730 11:00:03 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 01:02:58.730 11:00:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 01:02:58.988 11:00:03 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 01:02:58.988 11:00:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 01:02:58.988 [2024-07-22 11:00:04.144060] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:02:58.988 11:00:04 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 01:02:58.988 11:00:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:58.988 11:00:04 json_config -- common/autotest_common.sh@10 -- # set +x 01:02:59.245 11:00:04 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 01:02:59.245 11:00:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:59.245 11:00:04 json_config -- common/autotest_common.sh@10 -- # set +x 01:02:59.245 11:00:04 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 01:02:59.245 11:00:04 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 01:02:59.245 11:00:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 01:02:59.503 MallocBdevForConfigChangeCheck 01:02:59.503 11:00:04 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 01:02:59.503 11:00:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:59.503 11:00:04 json_config -- common/autotest_common.sh@10 -- # set +x 01:02:59.503 11:00:04 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 01:02:59.503 11:00:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:02:59.760 INFO: shutting down applications... 
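The sequence above builds the NVMe-oF test configuration entirely over RPC: two malloc bdevs, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with both namespaces and a listener on 127.0.0.1:4420, a throw-away MallocBdevForConfigChangeCheck, and finally save_config to capture the result. Collected into one sketch against the same socket (the redirect to spdk_tgt_config.json mirrors the configs_path declared earlier):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
# Persist everything the target is currently running.
$RPC save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json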
01:02:59.760 11:00:04 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 01:02:59.760 11:00:04 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 01:02:59.760 11:00:04 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 01:02:59.760 11:00:04 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 01:02:59.760 11:00:04 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 01:03:00.029 Calling clear_iscsi_subsystem 01:03:00.029 Calling clear_nvmf_subsystem 01:03:00.029 Calling clear_nbd_subsystem 01:03:00.029 Calling clear_ublk_subsystem 01:03:00.029 Calling clear_vhost_blk_subsystem 01:03:00.029 Calling clear_vhost_scsi_subsystem 01:03:00.029 Calling clear_bdev_subsystem 01:03:00.029 11:00:05 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 01:03:00.029 11:00:05 json_config -- json_config/json_config.sh@347 -- # count=100 01:03:00.029 11:00:05 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 01:03:00.029 11:00:05 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:03:00.029 11:00:05 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 01:03:00.029 11:00:05 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 01:03:00.606 11:00:05 json_config -- json_config/json_config.sh@349 -- # break 01:03:00.606 11:00:05 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 01:03:00.606 11:00:05 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 01:03:00.606 11:00:05 json_config -- json_config/common.sh@31 -- # local app=target 01:03:00.606 11:00:05 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 01:03:00.606 11:00:05 json_config -- json_config/common.sh@35 -- # [[ -n 71826 ]] 01:03:00.606 11:00:05 json_config -- json_config/common.sh@38 -- # kill -SIGINT 71826 01:03:00.606 11:00:05 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 01:03:00.606 11:00:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 01:03:00.606 11:00:05 json_config -- json_config/common.sh@41 -- # kill -0 71826 01:03:00.606 11:00:05 json_config -- json_config/common.sh@45 -- # sleep 0.5 01:03:00.865 11:00:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 01:03:00.865 11:00:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 01:03:00.865 11:00:06 json_config -- json_config/common.sh@41 -- # kill -0 71826 01:03:00.865 11:00:06 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 01:03:00.865 11:00:06 json_config -- json_config/common.sh@43 -- # break 01:03:00.865 SPDK target shutdown done 01:03:00.865 INFO: relaunching applications... 01:03:00.865 11:00:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 01:03:00.865 11:00:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 01:03:00.865 11:00:06 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
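json_config_test_shutdown_app above stops pid 71826 with SIGINT and then polls for up to 30 half-second intervals until the process is gone, which is why the clear_*_subsystem calls are followed by the kill/sleep pair in the trace. The helper boils down to something like this (retry count and interval taken from the trace; error handling abbreviated):

pid=71826
kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    # kill -0 only probes whether the process still exists.
    kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
    sleep 0.5
done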
01:03:00.865 11:00:06 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:03:00.865 11:00:06 json_config -- json_config/common.sh@9 -- # local app=target 01:03:00.865 11:00:06 json_config -- json_config/common.sh@10 -- # shift 01:03:00.865 11:00:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:03:00.865 11:00:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 01:03:00.865 11:00:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 01:03:00.865 11:00:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:03:00.865 11:00:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:03:00.865 11:00:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72010 01:03:00.865 Waiting for target to run... 01:03:00.865 11:00:06 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:03:00.865 11:00:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 01:03:00.865 11:00:06 json_config -- json_config/common.sh@25 -- # waitforlisten 72010 /var/tmp/spdk_tgt.sock 01:03:00.865 11:00:06 json_config -- common/autotest_common.sh@829 -- # '[' -z 72010 ']' 01:03:00.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 01:03:00.865 11:00:06 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:03:00.865 11:00:06 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:00.865 11:00:06 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 01:03:00.865 11:00:06 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:00.865 11:00:06 json_config -- common/autotest_common.sh@10 -- # set +x 01:03:01.123 [2024-07-22 11:00:06.091560] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:03:01.124 [2024-07-22 11:00:06.091650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72010 ] 01:03:01.381 [2024-07-22 11:00:06.452666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:01.381 [2024-07-22 11:00:06.480148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:01.638 [2024-07-22 11:00:06.605058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:01.638 [2024-07-22 11:00:06.793269] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:01.638 [2024-07-22 11:00:06.825281] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:03:01.932 01:03:01.932 INFO: Checking if target configuration is the same... 
01:03:01.932 11:00:06 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:01.932 11:00:06 json_config -- common/autotest_common.sh@862 -- # return 0 01:03:01.932 11:00:06 json_config -- json_config/common.sh@26 -- # echo '' 01:03:01.932 11:00:06 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 01:03:01.932 11:00:06 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 01:03:01.932 11:00:06 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 01:03:01.932 11:00:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:03:01.932 11:00:06 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:03:01.932 + '[' 2 -ne 2 ']' 01:03:01.932 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 01:03:01.932 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 01:03:01.932 + rootdir=/home/vagrant/spdk_repo/spdk 01:03:01.932 +++ basename /dev/fd/62 01:03:01.932 ++ mktemp /tmp/62.XXX 01:03:01.933 + tmp_file_1=/tmp/62.ipW 01:03:01.933 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:03:01.933 ++ mktemp /tmp/spdk_tgt_config.json.XXX 01:03:01.933 + tmp_file_2=/tmp/spdk_tgt_config.json.hAS 01:03:01.933 + ret=0 01:03:01.933 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:03:02.190 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:03:02.190 + diff -u /tmp/62.ipW /tmp/spdk_tgt_config.json.hAS 01:03:02.190 + echo 'INFO: JSON config files are the same' 01:03:02.190 INFO: JSON config files are the same 01:03:02.190 + rm /tmp/62.ipW /tmp/spdk_tgt_config.json.hAS 01:03:02.190 + exit 0 01:03:02.190 INFO: changing configuration and checking if this can be detected... 01:03:02.190 11:00:07 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 01:03:02.190 11:00:07 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 01:03:02.190 11:00:07 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 01:03:02.190 11:00:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 01:03:02.448 11:00:07 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:03:02.448 11:00:07 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 01:03:02.448 11:00:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:03:02.448 + '[' 2 -ne 2 ']' 01:03:02.448 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 01:03:02.448 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
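The '+' lines above are json_diff.sh at work: the relaunched target's live save_config output and the spdk_tgt_config.json it was booted from are both normalized with config_filter.py -method sort and compared with diff, and an empty diff means the configuration survived the save/load round trip. The second pass, after bdev_malloc_delete removes MallocBdevForConfigChangeCheck, must instead produce a non-empty diff. A condensed sketch of the check, with the /tmp file names as illustrative stand-ins for the mktemp results:

SPDK=/home/vagrant/spdk_repo/spdk
FILTER=$SPDK/test/json_config/config_filter.py
# Normalize the live config and the file it was loaded from, then compare.
$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | $FILTER -method sort > /tmp/live_config.json
$FILTER -method sort < $SPDK/spdk_tgt_config.json > /tmp/saved_config.json
if diff -u /tmp/saved_config.json /tmp/live_config.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi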
01:03:02.448 + rootdir=/home/vagrant/spdk_repo/spdk 01:03:02.448 +++ basename /dev/fd/62 01:03:02.448 ++ mktemp /tmp/62.XXX 01:03:02.448 + tmp_file_1=/tmp/62.KDI 01:03:02.448 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:03:02.448 ++ mktemp /tmp/spdk_tgt_config.json.XXX 01:03:02.448 + tmp_file_2=/tmp/spdk_tgt_config.json.wgK 01:03:02.448 + ret=0 01:03:02.448 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:03:02.705 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:03:02.963 + diff -u /tmp/62.KDI /tmp/spdk_tgt_config.json.wgK 01:03:02.963 + ret=1 01:03:02.963 + echo '=== Start of file: /tmp/62.KDI ===' 01:03:02.963 + cat /tmp/62.KDI 01:03:02.963 + echo '=== End of file: /tmp/62.KDI ===' 01:03:02.963 + echo '' 01:03:02.963 + echo '=== Start of file: /tmp/spdk_tgt_config.json.wgK ===' 01:03:02.963 + cat /tmp/spdk_tgt_config.json.wgK 01:03:02.963 + echo '=== End of file: /tmp/spdk_tgt_config.json.wgK ===' 01:03:02.963 + echo '' 01:03:02.963 + rm /tmp/62.KDI /tmp/spdk_tgt_config.json.wgK 01:03:02.963 + exit 1 01:03:02.963 INFO: configuration change detected. 01:03:02.963 11:00:07 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 01:03:02.963 11:00:07 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 01:03:02.963 11:00:07 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 01:03:02.963 11:00:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 01:03:02.963 11:00:07 json_config -- common/autotest_common.sh@10 -- # set +x 01:03:02.963 11:00:07 json_config -- json_config/json_config.sh@311 -- # local ret=0 01:03:02.963 11:00:07 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 01:03:02.963 11:00:07 json_config -- json_config/json_config.sh@321 -- # [[ -n 72010 ]] 01:03:02.963 11:00:07 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 01:03:02.963 11:00:07 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 01:03:02.963 11:00:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 01:03:02.963 11:00:07 json_config -- common/autotest_common.sh@10 -- # set +x 01:03:02.963 11:00:07 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 01:03:02.963 11:00:07 json_config -- json_config/json_config.sh@197 -- # uname -s 01:03:02.963 11:00:07 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 01:03:02.963 11:00:07 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 01:03:02.963 11:00:07 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 01:03:02.963 11:00:07 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 01:03:02.963 11:00:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 01:03:02.963 11:00:07 json_config -- common/autotest_common.sh@10 -- # set +x 01:03:02.963 11:00:08 json_config -- json_config/json_config.sh@327 -- # killprocess 72010 01:03:02.963 11:00:08 json_config -- common/autotest_common.sh@948 -- # '[' -z 72010 ']' 01:03:02.963 11:00:08 json_config -- common/autotest_common.sh@952 -- # kill -0 72010 01:03:02.963 11:00:08 json_config -- common/autotest_common.sh@953 -- # uname 01:03:02.963 11:00:08 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:02.963 11:00:08 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72010 01:03:02.963 
killing process with pid 72010 01:03:02.963 11:00:08 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:02.963 11:00:08 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:02.963 11:00:08 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72010' 01:03:02.963 11:00:08 json_config -- common/autotest_common.sh@967 -- # kill 72010 01:03:02.963 11:00:08 json_config -- common/autotest_common.sh@972 -- # wait 72010 01:03:03.221 11:00:08 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:03:03.221 11:00:08 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 01:03:03.221 11:00:08 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 01:03:03.221 11:00:08 json_config -- common/autotest_common.sh@10 -- # set +x 01:03:03.221 INFO: Success 01:03:03.221 11:00:08 json_config -- json_config/json_config.sh@332 -- # return 0 01:03:03.221 11:00:08 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 01:03:03.221 01:03:03.221 real 0m7.416s 01:03:03.221 user 0m10.058s 01:03:03.221 sys 0m1.795s 01:03:03.221 11:00:08 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:03.221 ************************************ 01:03:03.221 END TEST json_config 01:03:03.221 ************************************ 01:03:03.221 11:00:08 json_config -- common/autotest_common.sh@10 -- # set +x 01:03:03.221 11:00:08 -- common/autotest_common.sh@1142 -- # return 0 01:03:03.221 11:00:08 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 01:03:03.221 11:00:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:03.221 11:00:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:03.221 11:00:08 -- common/autotest_common.sh@10 -- # set +x 01:03:03.221 ************************************ 01:03:03.221 START TEST json_config_extra_key 01:03:03.221 ************************************ 01:03:03.221 11:00:08 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 01:03:03.480 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:03.480 11:00:08 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:03.480 11:00:08 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:03.480 11:00:08 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:03.480 11:00:08 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:03.481 11:00:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:03.481 11:00:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:03.481 11:00:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:03.481 11:00:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 01:03:03.481 11:00:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:03.481 11:00:08 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 01:03:03.481 11:00:08 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:03:03.481 11:00:08 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:03:03.481 11:00:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:03.481 11:00:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:03.481 11:00:08 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:03.481 11:00:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:03:03.481 11:00:08 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:03:03.481 11:00:08 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 01:03:03.481 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 01:03:03.481 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 01:03:03.481 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 01:03:03.481 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 01:03:03.481 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 01:03:03.481 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 01:03:03.481 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 01:03:03.481 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 01:03:03.481 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 01:03:03.481 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 01:03:03.481 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 01:03:03.481 INFO: launching applications... 01:03:03.481 11:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 01:03:03.481 11:00:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 01:03:03.481 11:00:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 01:03:03.481 11:00:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:03:03.481 11:00:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 01:03:03.481 11:00:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 01:03:03.481 11:00:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:03:03.481 11:00:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:03:03.481 Waiting for target to run... 01:03:03.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 01:03:03.481 11:00:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=72151 01:03:03.481 11:00:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
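[editor's note] The trace above shows json_config/common.sh keeping per-app settings in associative arrays keyed by app name ('target'). A minimal bash sketch of that bookkeeping pattern, with the values copied from the trace (array names mirror the script; the final echo is purely illustrative):

  # Hedged sketch of the per-app bookkeeping visible in the xtrace output above.
  declare -A app_pid=(['target']='')
  declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
  declare -A app_params=(['target']='-m 0x1 -s 1024')
  declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
  # Helpers then look up their settings by app name, e.g.:
  echo "launching target with ${app_params['target']} on ${app_socket['target']}"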
01:03:03.481 11:00:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 72151 /var/tmp/spdk_tgt.sock 01:03:03.481 11:00:08 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 72151 ']' 01:03:03.481 11:00:08 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:03:03.481 11:00:08 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:03.481 11:00:08 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 01:03:03.481 11:00:08 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:03.481 11:00:08 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 01:03:03.481 11:00:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 01:03:03.481 [2024-07-22 11:00:08.558662] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:03:03.481 [2024-07-22 11:00:08.558738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72151 ] 01:03:03.739 [2024-07-22 11:00:08.910533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:03.739 [2024-07-22 11:00:08.938600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:03.997 [2024-07-22 11:00:08.958765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:04.256 11:00:09 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:04.256 01:03:04.256 INFO: shutting down applications... 01:03:04.256 11:00:09 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 01:03:04.256 11:00:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 01:03:04.256 11:00:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
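[editor's note] The launch traced above starts spdk_tgt with a fixed core mask, memory size, RPC socket and JSON config, then waits until the RPC socket is listening. A hedged sketch of that launch-and-wait pattern; the spdk_tgt arguments and the 100-retry budget are the ones in the trace, while the polling loop is illustrative rather than the actual waitforlisten helper:

  # Launch spdk_tgt with the arguments seen in the trace (hedged sketch).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  tgt_pid=$!
  # Poll until the RPC UNIX socket accepts a request (illustrative loop, max_retries=100 as in the trace).
  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done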
01:03:04.256 11:00:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 01:03:04.256 11:00:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 01:03:04.256 11:00:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 01:03:04.256 11:00:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 72151 ]] 01:03:04.256 11:00:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 72151 01:03:04.256 11:00:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 01:03:04.256 11:00:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:03:04.256 11:00:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 72151 01:03:04.256 11:00:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:03:04.824 11:00:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:03:04.824 11:00:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:03:04.824 11:00:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 72151 01:03:04.824 11:00:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 01:03:04.824 SPDK target shutdown done 01:03:04.824 Success 01:03:04.824 11:00:09 json_config_extra_key -- json_config/common.sh@43 -- # break 01:03:04.824 11:00:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 01:03:04.824 11:00:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 01:03:04.824 11:00:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 01:03:04.824 ************************************ 01:03:04.824 END TEST json_config_extra_key 01:03:04.824 ************************************ 01:03:04.824 01:03:04.824 real 0m1.555s 01:03:04.824 user 0m1.273s 01:03:04.824 sys 0m0.395s 01:03:04.824 11:00:09 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:04.824 11:00:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 01:03:04.824 11:00:09 -- common/autotest_common.sh@1142 -- # return 0 01:03:04.824 11:00:09 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 01:03:04.824 11:00:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:04.824 11:00:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:04.824 11:00:09 -- common/autotest_common.sh@10 -- # set +x 01:03:04.824 ************************************ 01:03:04.824 START TEST alias_rpc 01:03:04.824 ************************************ 01:03:04.824 11:00:09 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 01:03:05.082 * Looking for test storage... 
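[editor's note] The shutdown sequence traced above sends SIGINT and then polls the pid for up to 30 half-second intervals before declaring the target down. A hedged sketch of that pattern; pid 72151, the 30-iteration budget, the 0.5 s sleep and the final message are taken from the trace:

  # Graceful shutdown as traced above: SIGINT, then poll until the process exits.
  kill -SIGINT 72151
  for (( i = 0; i < 30; i++ )); do
      kill -0 72151 2>/dev/null || break   # kill -0 only checks that the pid is still alive
      sleep 0.5
  done
  echo 'SPDK target shutdown done'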
01:03:05.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 01:03:05.082 11:00:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 01:03:05.082 11:00:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=72215 01:03:05.082 11:00:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:03:05.082 11:00:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 72215 01:03:05.082 11:00:10 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 72215 ']' 01:03:05.082 11:00:10 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:05.082 11:00:10 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:05.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:05.082 11:00:10 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:05.082 11:00:10 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:05.082 11:00:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 01:03:05.082 [2024-07-22 11:00:10.181924] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:03:05.082 [2024-07-22 11:00:10.182001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72215 ] 01:03:05.339 [2024-07-22 11:00:10.312566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:05.339 [2024-07-22 11:00:10.362592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:05.339 [2024-07-22 11:00:10.404502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:05.905 11:00:11 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:05.905 11:00:11 alias_rpc -- common/autotest_common.sh@862 -- # return 0 01:03:05.905 11:00:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 01:03:06.164 11:00:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 72215 01:03:06.164 11:00:11 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 72215 ']' 01:03:06.164 11:00:11 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 72215 01:03:06.164 11:00:11 alias_rpc -- common/autotest_common.sh@953 -- # uname 01:03:06.164 11:00:11 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:06.164 11:00:11 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72215 01:03:06.164 killing process with pid 72215 01:03:06.164 11:00:11 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:06.164 11:00:11 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:06.164 11:00:11 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72215' 01:03:06.164 11:00:11 alias_rpc -- common/autotest_common.sh@967 -- # kill 72215 01:03:06.164 11:00:11 alias_rpc -- common/autotest_common.sh@972 -- # wait 72215 01:03:06.731 ************************************ 01:03:06.731 END TEST alias_rpc 01:03:06.731 ************************************ 01:03:06.731 01:03:06.731 real 0m1.677s 01:03:06.731 user 0m1.825s 01:03:06.731 sys 0m0.420s 01:03:06.731 11:00:11 alias_rpc -- 
common/autotest_common.sh@1124 -- # xtrace_disable 01:03:06.731 11:00:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 01:03:06.731 11:00:11 -- common/autotest_common.sh@1142 -- # return 0 01:03:06.731 11:00:11 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 01:03:06.731 11:00:11 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 01:03:06.731 11:00:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:06.731 11:00:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:06.731 11:00:11 -- common/autotest_common.sh@10 -- # set +x 01:03:06.731 ************************************ 01:03:06.731 START TEST spdkcli_tcp 01:03:06.731 ************************************ 01:03:06.731 11:00:11 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 01:03:06.731 * Looking for test storage... 01:03:06.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 01:03:06.731 11:00:11 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 01:03:06.731 11:00:11 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 01:03:06.731 11:00:11 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 01:03:06.731 11:00:11 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 01:03:06.731 11:00:11 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 01:03:06.731 11:00:11 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 01:03:06.731 11:00:11 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 01:03:06.731 11:00:11 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 01:03:06.731 11:00:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:06.731 11:00:11 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=72286 01:03:06.731 11:00:11 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 01:03:06.731 11:00:11 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 72286 01:03:06.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:06.731 11:00:11 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 72286 ']' 01:03:06.731 11:00:11 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:06.731 11:00:11 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:06.731 11:00:11 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:06.731 11:00:11 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:06.731 11:00:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:06.989 [2024-07-22 11:00:11.940468] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
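[editor's note] The spdkcli_tcp run that follows starts spdk_tgt on its default UNIX RPC socket, bridges that socket to 127.0.0.1:9998 with socat, and then issues RPCs over TCP. A hedged sketch of that bridge; the socat arguments and the rpc.py retry/timeout/address/port values are the ones visible in the trace below, while the backgrounding and cleanup are illustrative:

  # Bridge the SPDK RPC UNIX socket to TCP, as exercised in the spdkcli_tcp run below.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # Query the target over TCP with the same retry/timeout settings seen in the trace.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"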
01:03:06.989 [2024-07-22 11:00:11.940557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72286 ] 01:03:06.989 [2024-07-22 11:00:12.083077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:03:06.989 [2024-07-22 11:00:12.161373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:03:06.989 [2024-07-22 11:00:12.161378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:07.245 [2024-07-22 11:00:12.235673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:07.811 11:00:12 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:07.811 11:00:12 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 01:03:07.811 11:00:12 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 01:03:07.811 11:00:12 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=72303 01:03:07.811 11:00:12 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 01:03:07.811 [ 01:03:07.811 "bdev_malloc_delete", 01:03:07.811 "bdev_malloc_create", 01:03:07.811 "bdev_null_resize", 01:03:07.811 "bdev_null_delete", 01:03:07.811 "bdev_null_create", 01:03:07.811 "bdev_nvme_cuse_unregister", 01:03:07.811 "bdev_nvme_cuse_register", 01:03:07.811 "bdev_opal_new_user", 01:03:07.811 "bdev_opal_set_lock_state", 01:03:07.811 "bdev_opal_delete", 01:03:07.811 "bdev_opal_get_info", 01:03:07.811 "bdev_opal_create", 01:03:07.811 "bdev_nvme_opal_revert", 01:03:07.811 "bdev_nvme_opal_init", 01:03:07.811 "bdev_nvme_send_cmd", 01:03:07.811 "bdev_nvme_get_path_iostat", 01:03:07.811 "bdev_nvme_get_mdns_discovery_info", 01:03:07.811 "bdev_nvme_stop_mdns_discovery", 01:03:07.811 "bdev_nvme_start_mdns_discovery", 01:03:07.811 "bdev_nvme_set_multipath_policy", 01:03:07.811 "bdev_nvme_set_preferred_path", 01:03:07.811 "bdev_nvme_get_io_paths", 01:03:07.811 "bdev_nvme_remove_error_injection", 01:03:07.811 "bdev_nvme_add_error_injection", 01:03:07.811 "bdev_nvme_get_discovery_info", 01:03:07.811 "bdev_nvme_stop_discovery", 01:03:07.811 "bdev_nvme_start_discovery", 01:03:07.811 "bdev_nvme_get_controller_health_info", 01:03:07.811 "bdev_nvme_disable_controller", 01:03:07.811 "bdev_nvme_enable_controller", 01:03:07.811 "bdev_nvme_reset_controller", 01:03:07.811 "bdev_nvme_get_transport_statistics", 01:03:07.811 "bdev_nvme_apply_firmware", 01:03:07.811 "bdev_nvme_detach_controller", 01:03:07.811 "bdev_nvme_get_controllers", 01:03:07.811 "bdev_nvme_attach_controller", 01:03:07.811 "bdev_nvme_set_hotplug", 01:03:07.811 "bdev_nvme_set_options", 01:03:07.811 "bdev_passthru_delete", 01:03:07.811 "bdev_passthru_create", 01:03:07.811 "bdev_lvol_set_parent_bdev", 01:03:07.811 "bdev_lvol_set_parent", 01:03:07.811 "bdev_lvol_check_shallow_copy", 01:03:07.811 "bdev_lvol_start_shallow_copy", 01:03:07.811 "bdev_lvol_grow_lvstore", 01:03:07.811 "bdev_lvol_get_lvols", 01:03:07.811 "bdev_lvol_get_lvstores", 01:03:07.811 "bdev_lvol_delete", 01:03:07.811 "bdev_lvol_set_read_only", 01:03:07.811 "bdev_lvol_resize", 01:03:07.811 "bdev_lvol_decouple_parent", 01:03:07.811 "bdev_lvol_inflate", 01:03:07.811 "bdev_lvol_rename", 01:03:07.811 "bdev_lvol_clone_bdev", 01:03:07.811 "bdev_lvol_clone", 01:03:07.811 "bdev_lvol_snapshot", 01:03:07.811 "bdev_lvol_create", 
01:03:07.811 "bdev_lvol_delete_lvstore", 01:03:07.811 "bdev_lvol_rename_lvstore", 01:03:07.811 "bdev_lvol_create_lvstore", 01:03:07.811 "bdev_raid_set_options", 01:03:07.811 "bdev_raid_remove_base_bdev", 01:03:07.811 "bdev_raid_add_base_bdev", 01:03:07.811 "bdev_raid_delete", 01:03:07.812 "bdev_raid_create", 01:03:07.812 "bdev_raid_get_bdevs", 01:03:07.812 "bdev_error_inject_error", 01:03:07.812 "bdev_error_delete", 01:03:07.812 "bdev_error_create", 01:03:07.812 "bdev_split_delete", 01:03:07.812 "bdev_split_create", 01:03:07.812 "bdev_delay_delete", 01:03:07.812 "bdev_delay_create", 01:03:07.812 "bdev_delay_update_latency", 01:03:07.812 "bdev_zone_block_delete", 01:03:07.812 "bdev_zone_block_create", 01:03:07.812 "blobfs_create", 01:03:07.812 "blobfs_detect", 01:03:07.812 "blobfs_set_cache_size", 01:03:07.812 "bdev_aio_delete", 01:03:07.812 "bdev_aio_rescan", 01:03:07.812 "bdev_aio_create", 01:03:07.812 "bdev_ftl_set_property", 01:03:07.812 "bdev_ftl_get_properties", 01:03:07.812 "bdev_ftl_get_stats", 01:03:07.812 "bdev_ftl_unmap", 01:03:07.812 "bdev_ftl_unload", 01:03:07.812 "bdev_ftl_delete", 01:03:07.812 "bdev_ftl_load", 01:03:07.812 "bdev_ftl_create", 01:03:07.812 "bdev_virtio_attach_controller", 01:03:07.812 "bdev_virtio_scsi_get_devices", 01:03:07.812 "bdev_virtio_detach_controller", 01:03:07.812 "bdev_virtio_blk_set_hotplug", 01:03:07.812 "bdev_iscsi_delete", 01:03:07.812 "bdev_iscsi_create", 01:03:07.812 "bdev_iscsi_set_options", 01:03:07.812 "bdev_uring_delete", 01:03:07.812 "bdev_uring_rescan", 01:03:07.812 "bdev_uring_create", 01:03:07.812 "accel_error_inject_error", 01:03:07.812 "ioat_scan_accel_module", 01:03:07.812 "dsa_scan_accel_module", 01:03:07.812 "iaa_scan_accel_module", 01:03:07.812 "keyring_file_remove_key", 01:03:07.812 "keyring_file_add_key", 01:03:07.812 "keyring_linux_set_options", 01:03:07.812 "iscsi_get_histogram", 01:03:07.812 "iscsi_enable_histogram", 01:03:07.812 "iscsi_set_options", 01:03:07.812 "iscsi_get_auth_groups", 01:03:07.812 "iscsi_auth_group_remove_secret", 01:03:07.812 "iscsi_auth_group_add_secret", 01:03:07.812 "iscsi_delete_auth_group", 01:03:07.812 "iscsi_create_auth_group", 01:03:07.812 "iscsi_set_discovery_auth", 01:03:07.812 "iscsi_get_options", 01:03:07.812 "iscsi_target_node_request_logout", 01:03:07.812 "iscsi_target_node_set_redirect", 01:03:07.812 "iscsi_target_node_set_auth", 01:03:07.812 "iscsi_target_node_add_lun", 01:03:07.812 "iscsi_get_stats", 01:03:07.812 "iscsi_get_connections", 01:03:07.812 "iscsi_portal_group_set_auth", 01:03:07.812 "iscsi_start_portal_group", 01:03:07.812 "iscsi_delete_portal_group", 01:03:07.812 "iscsi_create_portal_group", 01:03:07.812 "iscsi_get_portal_groups", 01:03:07.812 "iscsi_delete_target_node", 01:03:07.812 "iscsi_target_node_remove_pg_ig_maps", 01:03:07.812 "iscsi_target_node_add_pg_ig_maps", 01:03:07.812 "iscsi_create_target_node", 01:03:07.812 "iscsi_get_target_nodes", 01:03:07.812 "iscsi_delete_initiator_group", 01:03:07.812 "iscsi_initiator_group_remove_initiators", 01:03:07.812 "iscsi_initiator_group_add_initiators", 01:03:07.812 "iscsi_create_initiator_group", 01:03:07.812 "iscsi_get_initiator_groups", 01:03:07.812 "nvmf_set_crdt", 01:03:07.812 "nvmf_set_config", 01:03:07.812 "nvmf_set_max_subsystems", 01:03:07.812 "nvmf_stop_mdns_prr", 01:03:07.812 "nvmf_publish_mdns_prr", 01:03:07.812 "nvmf_subsystem_get_listeners", 01:03:07.812 "nvmf_subsystem_get_qpairs", 01:03:07.812 "nvmf_subsystem_get_controllers", 01:03:07.812 "nvmf_get_stats", 01:03:07.812 "nvmf_get_transports", 01:03:07.812 
"nvmf_create_transport", 01:03:07.812 "nvmf_get_targets", 01:03:07.812 "nvmf_delete_target", 01:03:07.812 "nvmf_create_target", 01:03:07.812 "nvmf_subsystem_allow_any_host", 01:03:07.812 "nvmf_subsystem_remove_host", 01:03:07.812 "nvmf_subsystem_add_host", 01:03:07.812 "nvmf_ns_remove_host", 01:03:07.812 "nvmf_ns_add_host", 01:03:07.812 "nvmf_subsystem_remove_ns", 01:03:07.812 "nvmf_subsystem_add_ns", 01:03:07.812 "nvmf_subsystem_listener_set_ana_state", 01:03:07.812 "nvmf_discovery_get_referrals", 01:03:07.812 "nvmf_discovery_remove_referral", 01:03:07.812 "nvmf_discovery_add_referral", 01:03:07.812 "nvmf_subsystem_remove_listener", 01:03:07.812 "nvmf_subsystem_add_listener", 01:03:07.812 "nvmf_delete_subsystem", 01:03:07.812 "nvmf_create_subsystem", 01:03:07.812 "nvmf_get_subsystems", 01:03:07.812 "env_dpdk_get_mem_stats", 01:03:07.812 "nbd_get_disks", 01:03:07.812 "nbd_stop_disk", 01:03:07.812 "nbd_start_disk", 01:03:07.812 "ublk_recover_disk", 01:03:07.812 "ublk_get_disks", 01:03:07.812 "ublk_stop_disk", 01:03:07.812 "ublk_start_disk", 01:03:07.812 "ublk_destroy_target", 01:03:07.812 "ublk_create_target", 01:03:07.812 "virtio_blk_create_transport", 01:03:07.812 "virtio_blk_get_transports", 01:03:07.812 "vhost_controller_set_coalescing", 01:03:07.812 "vhost_get_controllers", 01:03:07.812 "vhost_delete_controller", 01:03:07.812 "vhost_create_blk_controller", 01:03:07.812 "vhost_scsi_controller_remove_target", 01:03:07.812 "vhost_scsi_controller_add_target", 01:03:07.812 "vhost_start_scsi_controller", 01:03:07.812 "vhost_create_scsi_controller", 01:03:07.812 "thread_set_cpumask", 01:03:07.812 "framework_get_governor", 01:03:07.812 "framework_get_scheduler", 01:03:07.812 "framework_set_scheduler", 01:03:07.812 "framework_get_reactors", 01:03:07.812 "thread_get_io_channels", 01:03:07.812 "thread_get_pollers", 01:03:07.812 "thread_get_stats", 01:03:07.812 "framework_monitor_context_switch", 01:03:07.812 "spdk_kill_instance", 01:03:07.812 "log_enable_timestamps", 01:03:07.812 "log_get_flags", 01:03:07.812 "log_clear_flag", 01:03:07.812 "log_set_flag", 01:03:07.812 "log_get_level", 01:03:07.812 "log_set_level", 01:03:07.812 "log_get_print_level", 01:03:07.812 "log_set_print_level", 01:03:07.812 "framework_enable_cpumask_locks", 01:03:07.812 "framework_disable_cpumask_locks", 01:03:07.812 "framework_wait_init", 01:03:07.812 "framework_start_init", 01:03:07.812 "scsi_get_devices", 01:03:07.812 "bdev_get_histogram", 01:03:07.812 "bdev_enable_histogram", 01:03:07.812 "bdev_set_qos_limit", 01:03:07.812 "bdev_set_qd_sampling_period", 01:03:07.812 "bdev_get_bdevs", 01:03:07.812 "bdev_reset_iostat", 01:03:07.812 "bdev_get_iostat", 01:03:07.812 "bdev_examine", 01:03:07.812 "bdev_wait_for_examine", 01:03:07.812 "bdev_set_options", 01:03:07.812 "notify_get_notifications", 01:03:07.812 "notify_get_types", 01:03:07.812 "accel_get_stats", 01:03:07.812 "accel_set_options", 01:03:07.812 "accel_set_driver", 01:03:07.812 "accel_crypto_key_destroy", 01:03:07.812 "accel_crypto_keys_get", 01:03:07.812 "accel_crypto_key_create", 01:03:07.812 "accel_assign_opc", 01:03:07.812 "accel_get_module_info", 01:03:07.812 "accel_get_opc_assignments", 01:03:07.812 "vmd_rescan", 01:03:07.812 "vmd_remove_device", 01:03:07.812 "vmd_enable", 01:03:07.812 "sock_get_default_impl", 01:03:07.812 "sock_set_default_impl", 01:03:07.812 "sock_impl_set_options", 01:03:07.812 "sock_impl_get_options", 01:03:07.812 "iobuf_get_stats", 01:03:07.812 "iobuf_set_options", 01:03:07.812 "framework_get_pci_devices", 01:03:07.812 
"framework_get_config", 01:03:07.812 "framework_get_subsystems", 01:03:07.812 "trace_get_info", 01:03:07.812 "trace_get_tpoint_group_mask", 01:03:07.812 "trace_disable_tpoint_group", 01:03:07.812 "trace_enable_tpoint_group", 01:03:07.812 "trace_clear_tpoint_mask", 01:03:07.812 "trace_set_tpoint_mask", 01:03:07.812 "keyring_get_keys", 01:03:07.812 "spdk_get_version", 01:03:07.812 "rpc_get_methods" 01:03:07.812 ] 01:03:07.812 11:00:12 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 01:03:07.812 11:00:12 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:03:07.812 11:00:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:08.070 11:00:13 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:03:08.070 11:00:13 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 72286 01:03:08.070 11:00:13 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 72286 ']' 01:03:08.070 11:00:13 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 72286 01:03:08.070 11:00:13 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 01:03:08.070 11:00:13 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:08.070 11:00:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72286 01:03:08.070 killing process with pid 72286 01:03:08.070 11:00:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:08.070 11:00:13 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:08.070 11:00:13 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72286' 01:03:08.070 11:00:13 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 72286 01:03:08.070 11:00:13 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 72286 01:03:08.636 ************************************ 01:03:08.636 END TEST spdkcli_tcp 01:03:08.636 ************************************ 01:03:08.636 01:03:08.636 real 0m1.890s 01:03:08.636 user 0m3.206s 01:03:08.636 sys 0m0.610s 01:03:08.636 11:00:13 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:08.636 11:00:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:08.636 11:00:13 -- common/autotest_common.sh@1142 -- # return 0 01:03:08.636 11:00:13 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 01:03:08.636 11:00:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:08.636 11:00:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:08.636 11:00:13 -- common/autotest_common.sh@10 -- # set +x 01:03:08.636 ************************************ 01:03:08.636 START TEST dpdk_mem_utility 01:03:08.636 ************************************ 01:03:08.636 11:00:13 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 01:03:08.636 * Looking for test storage... 
01:03:08.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 01:03:08.636 11:00:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 01:03:08.636 11:00:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=72377 01:03:08.636 11:00:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:03:08.636 11:00:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 72377 01:03:08.636 11:00:13 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 72377 ']' 01:03:08.636 11:00:13 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:08.636 11:00:13 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:08.636 11:00:13 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:08.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:08.636 11:00:13 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:08.636 11:00:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:03:08.894 [2024-07-22 11:00:13.891417] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:03:08.894 [2024-07-22 11:00:13.892021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72377 ] 01:03:08.894 [2024-07-22 11:00:14.033982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:09.152 [2024-07-22 11:00:14.103984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:09.152 [2024-07-22 11:00:14.176609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:09.719 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:09.719 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 01:03:09.719 11:00:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 01:03:09.719 11:00:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 01:03:09.719 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:09.719 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:03:09.719 { 01:03:09.719 "filename": "/tmp/spdk_mem_dump.txt" 01:03:09.719 } 01:03:09.719 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:09.719 11:00:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 01:03:09.719 DPDK memory size 814.000000 MiB in 1 heap(s) 01:03:09.719 1 heaps totaling size 814.000000 MiB 01:03:09.719 size: 814.000000 MiB heap id: 0 01:03:09.719 end heaps---------- 01:03:09.719 8 mempools totaling size 598.116089 MiB 01:03:09.719 size: 212.674988 MiB name: PDU_immediate_data_Pool 01:03:09.719 size: 158.602051 MiB name: PDU_data_out_Pool 01:03:09.719 size: 84.521057 MiB name: bdev_io_72377 01:03:09.719 size: 51.011292 MiB name: evtpool_72377 01:03:09.719 size: 50.003479 
MiB name: msgpool_72377 01:03:09.719 size: 21.763794 MiB name: PDU_Pool 01:03:09.719 size: 19.513306 MiB name: SCSI_TASK_Pool 01:03:09.719 size: 0.026123 MiB name: Session_Pool 01:03:09.719 end mempools------- 01:03:09.719 6 memzones totaling size 4.142822 MiB 01:03:09.719 size: 1.000366 MiB name: RG_ring_0_72377 01:03:09.719 size: 1.000366 MiB name: RG_ring_1_72377 01:03:09.719 size: 1.000366 MiB name: RG_ring_4_72377 01:03:09.719 size: 1.000366 MiB name: RG_ring_5_72377 01:03:09.719 size: 0.125366 MiB name: RG_ring_2_72377 01:03:09.719 size: 0.015991 MiB name: RG_ring_3_72377 01:03:09.719 end memzones------- 01:03:09.719 11:00:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 01:03:09.719 heap id: 0 total size: 814.000000 MiB number of busy elements: 298 number of free elements: 15 01:03:09.719 list of free elements. size: 12.472290 MiB 01:03:09.719 element at address: 0x200000400000 with size: 1.999512 MiB 01:03:09.719 element at address: 0x200018e00000 with size: 0.999878 MiB 01:03:09.719 element at address: 0x200019000000 with size: 0.999878 MiB 01:03:09.719 element at address: 0x200003e00000 with size: 0.996277 MiB 01:03:09.719 element at address: 0x200031c00000 with size: 0.994446 MiB 01:03:09.719 element at address: 0x200013800000 with size: 0.978699 MiB 01:03:09.719 element at address: 0x200007000000 with size: 0.959839 MiB 01:03:09.719 element at address: 0x200019200000 with size: 0.936584 MiB 01:03:09.719 element at address: 0x200000200000 with size: 0.833191 MiB 01:03:09.719 element at address: 0x20001aa00000 with size: 0.568604 MiB 01:03:09.719 element at address: 0x20000b200000 with size: 0.489624 MiB 01:03:09.719 element at address: 0x200000800000 with size: 0.486145 MiB 01:03:09.719 element at address: 0x200019400000 with size: 0.485657 MiB 01:03:09.719 element at address: 0x200027e00000 with size: 0.396118 MiB 01:03:09.719 element at address: 0x200003a00000 with size: 0.347839 MiB 01:03:09.719 list of standard malloc elements. 
size: 199.265137 MiB 01:03:09.719 element at address: 0x20000b3fff80 with size: 132.000122 MiB 01:03:09.719 element at address: 0x2000071fff80 with size: 64.000122 MiB 01:03:09.719 element at address: 0x200018efff80 with size: 1.000122 MiB 01:03:09.719 element at address: 0x2000190fff80 with size: 1.000122 MiB 01:03:09.719 element at address: 0x2000192fff80 with size: 1.000122 MiB 01:03:09.719 element at address: 0x2000003d9f00 with size: 0.140747 MiB 01:03:09.719 element at address: 0x2000192eff00 with size: 0.062622 MiB 01:03:09.719 element at address: 0x2000003fdf80 with size: 0.007935 MiB 01:03:09.719 element at address: 0x2000192efdc0 with size: 0.000305 MiB 01:03:09.719 element at address: 0x2000002d54c0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d5580 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d5640 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d5700 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d57c0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d5880 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d5940 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d5a00 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d5b80 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d5c40 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d5d00 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d5e80 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d5f40 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6000 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d60c0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6180 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6240 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6300 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d63c0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6480 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6540 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6600 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d66c0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d68c0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6980 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6a40 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6b00 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6c80 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6d40 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6e00 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d6f80 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d7040 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d7100 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d71c0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d7280 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d7340 with size: 0.000183 MiB 
01:03:09.719 element at address: 0x2000002d7400 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d74c0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d7580 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d7640 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d7700 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d77c0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d7880 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d7940 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d7a00 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d7b80 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000002d7c40 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000003d9e40 with size: 0.000183 MiB 01:03:09.719 element at address: 0x20000087c740 with size: 0.000183 MiB 01:03:09.719 element at address: 0x20000087c800 with size: 0.000183 MiB 01:03:09.719 element at address: 0x20000087c8c0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x20000087c980 with size: 0.000183 MiB 01:03:09.719 element at address: 0x20000087ca40 with size: 0.000183 MiB 01:03:09.719 element at address: 0x20000087cb00 with size: 0.000183 MiB 01:03:09.719 element at address: 0x20000087cbc0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x20000087cc80 with size: 0.000183 MiB 01:03:09.719 element at address: 0x20000087cd40 with size: 0.000183 MiB 01:03:09.719 element at address: 0x20000087ce00 with size: 0.000183 MiB 01:03:09.719 element at address: 0x20000087cec0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x2000008fd180 with size: 0.000183 MiB 01:03:09.719 element at address: 0x200003a590c0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x200003a59180 with size: 0.000183 MiB 01:03:09.719 element at address: 0x200003a59240 with size: 0.000183 MiB 01:03:09.719 element at address: 0x200003a59300 with size: 0.000183 MiB 01:03:09.719 element at address: 0x200003a593c0 with size: 0.000183 MiB 01:03:09.719 element at address: 0x200003a59480 with size: 0.000183 MiB 01:03:09.719 element at address: 0x200003a59540 with size: 0.000183 MiB 01:03:09.719 element at address: 0x200003a59600 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a596c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a59780 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a59840 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a59900 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a599c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a59a80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a59b40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a59c00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a59cc0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a59d80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a59e40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a59f00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a59fc0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5a080 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5a140 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5a200 with size: 0.000183 MiB 01:03:09.720 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5a380 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5a440 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5a500 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5a680 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5a740 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5a800 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5a980 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5aa40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5ab00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5abc0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5ac80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5ad40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5ae00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5aec0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5af80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003a5b040 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003adb300 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003adb500 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003adf7c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003affa80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003affb40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200003eff0c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x2000070fdd80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20000b27d580 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20000b27d640 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20000b27d700 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20000b27d880 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20000b27d940 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20000b27da00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20000b27dac0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x2000192efc40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x2000192efd00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x2000194bc740 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa91900 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa919c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa91a80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa91b40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa91c00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa91d80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa91e40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa91f00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92080 
with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92140 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92200 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa922c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92380 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92440 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92500 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa925c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92680 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92740 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92800 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa928c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92980 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92a40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92b00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92c80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92d40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92e00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa92f80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93040 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93100 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa931c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93280 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93340 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93400 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa934c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93580 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93640 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93700 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa937c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93880 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93940 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93a00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93b80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93c40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93d00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93e80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa93f40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94000 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa940c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94180 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94240 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94300 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa943c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94480 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94540 with size: 0.000183 MiB 
01:03:09.720 element at address: 0x20001aa94600 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa946c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94780 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94840 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94900 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa949c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94a80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94b40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94c00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94d80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94e40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94f00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa95080 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa95140 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa95200 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa952c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa95380 with size: 0.000183 MiB 01:03:09.720 element at address: 0x20001aa95440 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e65680 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e65740 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6c340 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6c540 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6c600 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6c780 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6c840 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6c900 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6ca80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6cb40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6cc00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6cd80 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6ce40 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6cf00 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6d080 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6d140 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6d200 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6d380 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6d440 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6d500 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6d680 with size: 0.000183 MiB 01:03:09.720 element at address: 0x200027e6d740 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6d800 with size: 0.000183 MiB 01:03:09.721 element at 
address: 0x200027e6d8c0 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6d980 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6da40 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6db00 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6dc80 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6dd40 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6de00 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6dec0 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6df80 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6e040 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6e100 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6e280 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6e340 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6e400 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6e580 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6e640 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6e700 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6e880 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6e940 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6ea00 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6eac0 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6eb80 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6ec40 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6ed00 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6edc0 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6ee80 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6ef40 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6f000 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6f180 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6f240 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6f300 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6f480 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6f540 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6f600 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6f780 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6f840 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6f900 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6fa80 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6fb40 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6fc00 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6fd80 
with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6fe40 with size: 0.000183 MiB 01:03:09.721 element at address: 0x200027e6ff00 with size: 0.000183 MiB 01:03:09.721 list of memzone associated elements. size: 602.262573 MiB 01:03:09.721 element at address: 0x20001aa95500 with size: 211.416748 MiB 01:03:09.721 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 01:03:09.721 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 01:03:09.721 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 01:03:09.721 element at address: 0x2000139fab80 with size: 84.020630 MiB 01:03:09.721 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_72377_0 01:03:09.721 element at address: 0x2000009ff380 with size: 48.003052 MiB 01:03:09.721 associated memzone info: size: 48.002930 MiB name: MP_evtpool_72377_0 01:03:09.721 element at address: 0x200003fff380 with size: 48.003052 MiB 01:03:09.721 associated memzone info: size: 48.002930 MiB name: MP_msgpool_72377_0 01:03:09.721 element at address: 0x2000195be940 with size: 20.255554 MiB 01:03:09.721 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 01:03:09.721 element at address: 0x200031dfeb40 with size: 18.005066 MiB 01:03:09.721 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 01:03:09.721 element at address: 0x2000005ffe00 with size: 2.000488 MiB 01:03:09.721 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_72377 01:03:09.721 element at address: 0x200003bffe00 with size: 2.000488 MiB 01:03:09.721 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_72377 01:03:09.721 element at address: 0x2000002d7d00 with size: 1.008118 MiB 01:03:09.721 associated memzone info: size: 1.007996 MiB name: MP_evtpool_72377 01:03:09.721 element at address: 0x20000b2fde40 with size: 1.008118 MiB 01:03:09.721 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 01:03:09.721 element at address: 0x2000194bc800 with size: 1.008118 MiB 01:03:09.721 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 01:03:09.721 element at address: 0x2000070fde40 with size: 1.008118 MiB 01:03:09.721 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 01:03:09.721 element at address: 0x2000008fd240 with size: 1.008118 MiB 01:03:09.721 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 01:03:09.721 element at address: 0x200003eff180 with size: 1.000488 MiB 01:03:09.721 associated memzone info: size: 1.000366 MiB name: RG_ring_0_72377 01:03:09.721 element at address: 0x200003affc00 with size: 1.000488 MiB 01:03:09.721 associated memzone info: size: 1.000366 MiB name: RG_ring_1_72377 01:03:09.721 element at address: 0x2000138fa980 with size: 1.000488 MiB 01:03:09.721 associated memzone info: size: 1.000366 MiB name: RG_ring_4_72377 01:03:09.721 element at address: 0x200031cfe940 with size: 1.000488 MiB 01:03:09.721 associated memzone info: size: 1.000366 MiB name: RG_ring_5_72377 01:03:09.721 element at address: 0x200003a5b100 with size: 0.500488 MiB 01:03:09.721 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_72377 01:03:09.721 element at address: 0x20000b27db80 with size: 0.500488 MiB 01:03:09.721 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 01:03:09.721 element at address: 0x20000087cf80 with size: 0.500488 MiB 01:03:09.721 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 01:03:09.721 element at address: 0x20001947c540 with size: 
0.250488 MiB 01:03:09.721 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 01:03:09.721 element at address: 0x200003adf880 with size: 0.125488 MiB 01:03:09.721 associated memzone info: size: 0.125366 MiB name: RG_ring_2_72377 01:03:09.721 element at address: 0x2000070f5b80 with size: 0.031738 MiB 01:03:09.721 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 01:03:09.721 element at address: 0x200027e65800 with size: 0.023743 MiB 01:03:09.721 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 01:03:09.721 element at address: 0x200003adb5c0 with size: 0.016113 MiB 01:03:09.721 associated memzone info: size: 0.015991 MiB name: RG_ring_3_72377 01:03:09.721 element at address: 0x200027e6b940 with size: 0.002441 MiB 01:03:09.721 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 01:03:09.721 element at address: 0x2000002d6780 with size: 0.000305 MiB 01:03:09.721 associated memzone info: size: 0.000183 MiB name: MP_msgpool_72377 01:03:09.721 element at address: 0x200003adb3c0 with size: 0.000305 MiB 01:03:09.721 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_72377 01:03:09.721 element at address: 0x200027e6c400 with size: 0.000305 MiB 01:03:09.721 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 01:03:09.721 11:00:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 01:03:09.721 11:00:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 72377 01:03:09.721 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 72377 ']' 01:03:09.721 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 72377 01:03:09.721 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 01:03:09.721 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:09.721 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72377 01:03:09.721 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:09.721 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:09.721 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72377' 01:03:09.721 killing process with pid 72377 01:03:09.721 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 72377 01:03:09.721 11:00:14 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 72377 01:03:10.293 01:03:10.293 real 0m1.724s 01:03:10.293 user 0m1.587s 01:03:10.293 sys 0m0.567s 01:03:10.293 ************************************ 01:03:10.293 END TEST dpdk_mem_utility 01:03:10.293 ************************************ 01:03:10.293 11:00:15 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:10.293 11:00:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:03:10.293 11:00:15 -- common/autotest_common.sh@1142 -- # return 0 01:03:10.293 11:00:15 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 01:03:10.293 11:00:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:10.293 11:00:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:10.293 11:00:15 -- common/autotest_common.sh@10 -- # set +x 01:03:10.557 ************************************ 01:03:10.557 START TEST event 01:03:10.557 ************************************ 01:03:10.557 11:00:15 event -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 01:03:10.557 * Looking for test storage... 01:03:10.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 01:03:10.557 11:00:15 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 01:03:10.557 11:00:15 event -- bdev/nbd_common.sh@6 -- # set -e 01:03:10.557 11:00:15 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 01:03:10.557 11:00:15 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 01:03:10.557 11:00:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:10.557 11:00:15 event -- common/autotest_common.sh@10 -- # set +x 01:03:10.557 ************************************ 01:03:10.557 START TEST event_perf 01:03:10.557 ************************************ 01:03:10.557 11:00:15 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 01:03:10.557 Running I/O for 1 seconds...[2024-07-22 11:00:15.670684] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:03:10.557 [2024-07-22 11:00:15.670786] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72454 ] 01:03:10.815 [2024-07-22 11:00:15.816570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:03:10.815 [2024-07-22 11:00:15.896824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:03:10.815 [2024-07-22 11:00:15.897014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:03:10.815 [2024-07-22 11:00:15.897128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:03:10.815 [2024-07-22 11:00:15.897135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:11.753 Running I/O for 1 seconds... 01:03:11.753 lcore 0: 201705 01:03:11.753 lcore 1: 201704 01:03:11.753 lcore 2: 201703 01:03:11.753 lcore 3: 201704 01:03:11.753 done. 01:03:12.011 01:03:12.011 real 0m1.320s 01:03:12.011 user 0m4.108s 01:03:12.011 sys 0m0.087s 01:03:12.011 11:00:16 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:12.011 11:00:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 01:03:12.011 ************************************ 01:03:12.011 END TEST event_perf 01:03:12.011 ************************************ 01:03:12.011 11:00:17 event -- common/autotest_common.sh@1142 -- # return 0 01:03:12.011 11:00:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 01:03:12.011 11:00:17 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:03:12.011 11:00:17 event -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:12.011 11:00:17 event -- common/autotest_common.sh@10 -- # set +x 01:03:12.011 ************************************ 01:03:12.011 START TEST event_reactor 01:03:12.011 ************************************ 01:03:12.011 11:00:17 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 01:03:12.011 [2024-07-22 11:00:17.046721] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:03:12.011 [2024-07-22 11:00:17.047048] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72487 ] 01:03:12.011 [2024-07-22 11:00:17.193307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:12.268 [2024-07-22 11:00:17.242619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:13.203 test_start 01:03:13.203 oneshot 01:03:13.203 tick 100 01:03:13.203 tick 100 01:03:13.203 tick 250 01:03:13.203 tick 100 01:03:13.203 tick 100 01:03:13.203 tick 100 01:03:13.203 tick 250 01:03:13.203 tick 500 01:03:13.203 tick 100 01:03:13.203 tick 100 01:03:13.203 tick 250 01:03:13.203 tick 100 01:03:13.203 tick 100 01:03:13.203 test_end 01:03:13.203 01:03:13.203 real 0m1.286s 01:03:13.203 user 0m1.112s 01:03:13.203 sys 0m0.067s 01:03:13.203 11:00:18 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:13.203 ************************************ 01:03:13.203 END TEST event_reactor 01:03:13.203 ************************************ 01:03:13.203 11:00:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 01:03:13.203 11:00:18 event -- common/autotest_common.sh@1142 -- # return 0 01:03:13.203 11:00:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 01:03:13.203 11:00:18 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:03:13.204 11:00:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:13.204 11:00:18 event -- common/autotest_common.sh@10 -- # set +x 01:03:13.204 ************************************ 01:03:13.204 START TEST event_reactor_perf 01:03:13.204 ************************************ 01:03:13.204 11:00:18 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 01:03:13.204 [2024-07-22 11:00:18.398352] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
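For reference, the three event micro-benchmarks exercised in this part of the run are standalone SPDK apps that run_test launches directly, as the traced commands above show; the reactor_perf instance whose startup banner appears just above follows the same pattern. A condensed view of those invocations, with paths, core masks (-m/-c) and run time (-t, in seconds) copied from the run_test lines in the trace rather than from any new interface:

    SPDK=/home/vagrant/spdk_repo/spdk
    # event_perf: schedules events across the reactors in the 0xF mask and prints per-lcore counts
    $SPDK/test/event/event_perf/event_perf -m 0xF -t 1
    # reactor: single reactor, prints the oneshot/tick trace seen above
    $SPDK/test/event/reactor/reactor -t 1
    # reactor_perf: single reactor, reports events per second
    $SPDK/test/event/reactor_perf/reactor_perf -t 1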
01:03:13.204 [2024-07-22 11:00:18.398501] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72522 ] 01:03:13.462 [2024-07-22 11:00:18.543419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:13.462 [2024-07-22 11:00:18.591666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:14.837 test_start 01:03:14.837 test_end 01:03:14.837 Performance: 477752 events per second 01:03:14.837 01:03:14.837 real 0m1.282s 01:03:14.837 user 0m1.117s 01:03:14.837 sys 0m0.059s 01:03:14.837 ************************************ 01:03:14.837 11:00:19 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:14.837 11:00:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 01:03:14.837 END TEST event_reactor_perf 01:03:14.837 ************************************ 01:03:14.837 11:00:19 event -- common/autotest_common.sh@1142 -- # return 0 01:03:14.837 11:00:19 event -- event/event.sh@49 -- # uname -s 01:03:14.837 11:00:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 01:03:14.837 11:00:19 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 01:03:14.837 11:00:19 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:14.837 11:00:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:14.837 11:00:19 event -- common/autotest_common.sh@10 -- # set +x 01:03:14.837 ************************************ 01:03:14.837 START TEST event_scheduler 01:03:14.837 ************************************ 01:03:14.837 11:00:19 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 01:03:14.837 * Looking for test storage... 01:03:14.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 01:03:14.837 11:00:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 01:03:14.837 11:00:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72584 01:03:14.837 11:00:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 01:03:14.837 11:00:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 01:03:14.837 11:00:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72584 01:03:14.837 11:00:19 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 72584 ']' 01:03:14.837 11:00:19 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:14.837 11:00:19 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:14.837 11:00:19 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:14.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:14.837 11:00:19 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:14.837 11:00:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:03:14.837 [2024-07-22 11:00:19.906271] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:03:14.837 [2024-07-22 11:00:19.906955] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72584 ] 01:03:15.095 [2024-07-22 11:00:20.050264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:03:15.095 [2024-07-22 11:00:20.097732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:15.095 [2024-07-22 11:00:20.097927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:03:15.095 [2024-07-22 11:00:20.098095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:03:15.095 [2024-07-22 11:00:20.098100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:03:15.705 11:00:20 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:15.705 11:00:20 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 01:03:15.705 11:00:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 01:03:15.705 11:00:20 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:15.705 11:00:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:03:15.705 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:03:15.705 POWER: Cannot set governor of lcore 0 to userspace 01:03:15.705 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:03:15.705 POWER: Cannot set governor of lcore 0 to performance 01:03:15.705 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:03:15.705 POWER: Cannot set governor of lcore 0 to userspace 01:03:15.705 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:03:15.705 POWER: Cannot set governor of lcore 0 to userspace 01:03:15.705 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 01:03:15.705 POWER: Unable to set Power Management Environment for lcore 0 01:03:15.705 [2024-07-22 11:00:20.748346] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 01:03:15.705 [2024-07-22 11:00:20.748414] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 01:03:15.705 [2024-07-22 11:00:20.748480] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 01:03:15.705 [2024-07-22 11:00:20.748549] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 01:03:15.705 [2024-07-22 11:00:20.748710] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 01:03:15.705 [2024-07-22 11:00:20.748807] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 01:03:15.705 11:00:20 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:15.705 11:00:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 01:03:15.705 11:00:20 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:15.705 11:00:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:03:15.705 [2024-07-22 11:00:20.799282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:15.705 [2024-07-22 11:00:20.820256] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
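The governor and scheduler notices above come from the scheduler test app being driven over JSON-RPC: it is started with --wait-for-rpc, switched to the dynamic scheduler, and only then allowed to finish framework initialization, at which point the dpdk governor probe fails on this host (no writable cpufreq scaling_governor) and the scheduler continues without it. A minimal sketch of that bring-up, assuming an app already listening on /var/tmp/spdk.sock (the socket waitforlisten polls above); the two calls mirror the rpc_cmd invocations in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Pick the dynamic scheduler before subsystems initialize; this is where the
    # "Cannot set governor of lcore 0" notices are emitted on hosts without cpufreq access.
    $rpc -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    # Complete initialization so the reactors start and the scheduler begins balancing threads.
    $rpc -s /var/tmp/spdk.sock framework_start_init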
01:03:15.705 11:00:20 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:15.705 11:00:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 01:03:15.705 11:00:20 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:15.705 11:00:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:15.705 11:00:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:03:15.705 ************************************ 01:03:15.705 START TEST scheduler_create_thread 01:03:15.705 ************************************ 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:03:15.705 2 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:03:15.705 3 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:03:15.705 4 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:15.705 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:03:15.963 5 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:03:15.963 6 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:03:15.963 7 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:03:15.963 8 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:03:15.963 9 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:03:15.963 10 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:15.963 11:00:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:03:17.334 11:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:17.334 11:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 01:03:17.334 11:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 01:03:17.334 11:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:17.334 11:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:03:17.899 11:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:17.899 11:00:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 01:03:17.899 11:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:17.899 11:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:03:18.831 11:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:18.831 11:00:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 01:03:18.831 11:00:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 01:03:18.831 11:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:18.831 11:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:03:19.765 ************************************ 01:03:19.765 END TEST scheduler_create_thread 01:03:19.765 ************************************ 01:03:19.765 11:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:19.765 01:03:19.765 real 0m3.880s 01:03:19.765 user 0m0.029s 01:03:19.765 sys 0m0.005s 01:03:19.765 11:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:19.765 11:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:03:19.765 11:00:24 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 01:03:19.765 11:00:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 01:03:19.765 11:00:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72584 01:03:19.765 11:00:24 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 72584 ']' 01:03:19.765 11:00:24 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 72584 01:03:19.765 11:00:24 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 01:03:19.765 11:00:24 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:19.765 11:00:24 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72584 01:03:19.765 killing process with pid 72584 01:03:19.765 11:00:24 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:03:19.765 11:00:24 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:03:19.765 11:00:24 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72584' 01:03:19.765 11:00:24 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 72584 01:03:19.765 11:00:24 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 72584 01:03:20.023 [2024-07-22 11:00:25.095509] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
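The scheduler_create_thread trace above is a thread-lifecycle exercise driven through an rpc.py plugin shipped with the test app: it creates pinned active and idle threads with given busy percentages, adjusts one thread's activity, and creates then deletes another. A sketch of that sequence under the same assumptions as the trace (scheduler test app still running, rpc.py able to import the scheduler_plugin module, assumed here to live under test/event/scheduler; the subcommands and arguments are the ones shown in the rpc_cmd lines above, while the shell variable names are only for illustration):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/event/scheduler   # assumed location of scheduler_plugin
    # -n thread name, -m cpu mask to pin to, -a active (busy) percentage; prints the new thread id
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    # Change how busy that thread reports itself: <thread id> <active percentage>
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    # Create an unpinned thread and delete it again, as the test's "deleted" case does
    tid2=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid2"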
01:03:20.282 01:03:20.282 real 0m5.636s 01:03:20.282 user 0m11.919s 01:03:20.282 sys 0m0.391s 01:03:20.282 11:00:25 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:20.282 11:00:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:03:20.282 ************************************ 01:03:20.282 END TEST event_scheduler 01:03:20.282 ************************************ 01:03:20.282 11:00:25 event -- common/autotest_common.sh@1142 -- # return 0 01:03:20.282 11:00:25 event -- event/event.sh@51 -- # modprobe -n nbd 01:03:20.282 11:00:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 01:03:20.282 11:00:25 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:20.282 11:00:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:20.282 11:00:25 event -- common/autotest_common.sh@10 -- # set +x 01:03:20.282 ************************************ 01:03:20.282 START TEST app_repeat 01:03:20.282 ************************************ 01:03:20.282 11:00:25 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 01:03:20.282 11:00:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:20.282 11:00:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:20.282 11:00:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list 01:03:20.282 11:00:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 01:03:20.282 11:00:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list 01:03:20.282 11:00:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 01:03:20.282 11:00:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 01:03:20.282 11:00:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72689 01:03:20.282 11:00:25 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 01:03:20.282 11:00:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 01:03:20.282 11:00:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72689' 01:03:20.282 Process app_repeat pid: 72689 01:03:20.282 11:00:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:03:20.282 spdk_app_start Round 0 01:03:20.282 11:00:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 01:03:20.282 11:00:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72689 /var/tmp/spdk-nbd.sock 01:03:20.282 11:00:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72689 ']' 01:03:20.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:03:20.282 11:00:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:03:20.282 11:00:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:20.282 11:00:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:03:20.282 11:00:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:20.282 11:00:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:03:20.282 [2024-07-22 11:00:25.482881] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:03:20.282 [2024-07-22 11:00:25.482952] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72689 ] 01:03:20.540 [2024-07-22 11:00:25.627095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:03:20.540 [2024-07-22 11:00:25.671404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:03:20.540 [2024-07-22 11:00:25.671416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:20.540 [2024-07-22 11:00:25.712491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:21.474 11:00:26 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:21.474 11:00:26 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 01:03:21.474 11:00:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:03:21.474 Malloc0 01:03:21.474 11:00:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:03:21.731 Malloc1 01:03:21.731 11:00:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:03:21.731 11:00:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:21.731 11:00:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:03:21.731 11:00:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:03:21.731 11:00:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:21.732 11:00:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:03:21.732 11:00:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:03:21.732 11:00:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:21.732 11:00:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:03:21.732 11:00:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:03:21.732 11:00:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:21.732 11:00:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:03:21.732 11:00:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:03:21.732 11:00:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:03:21.732 11:00:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:03:21.732 11:00:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:03:21.989 /dev/nbd0 01:03:21.989 11:00:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:03:21.989 11:00:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:03:21.989 11:00:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 01:03:21.989 11:00:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 01:03:21.989 11:00:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 01:03:21.989 11:00:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 01:03:21.989 11:00:26 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 01:03:21.989 11:00:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 01:03:21.989 11:00:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 01:03:21.989 11:00:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 01:03:21.989 11:00:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:03:21.989 1+0 records in 01:03:21.989 1+0 records out 01:03:21.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356277 s, 11.5 MB/s 01:03:21.989 11:00:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:03:21.989 11:00:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 01:03:21.989 11:00:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:03:21.989 11:00:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 01:03:21.989 11:00:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 01:03:21.989 11:00:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:03:21.989 11:00:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:03:21.989 11:00:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:03:21.989 /dev/nbd1 01:03:21.989 11:00:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:03:21.989 11:00:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:03:21.989 11:00:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 01:03:21.989 11:00:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 01:03:21.989 11:00:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 01:03:21.989 11:00:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 01:03:21.989 11:00:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 01:03:22.247 11:00:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 01:03:22.247 11:00:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 01:03:22.247 11:00:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 01:03:22.247 11:00:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:03:22.247 1+0 records in 01:03:22.247 1+0 records out 01:03:22.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222659 s, 18.4 MB/s 01:03:22.247 11:00:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:03:22.247 11:00:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 01:03:22.247 11:00:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:03:22.247 11:00:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 01:03:22.247 11:00:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:03:22.247 { 01:03:22.247 "nbd_device": "/dev/nbd0", 01:03:22.247 "bdev_name": "Malloc0" 01:03:22.247 }, 01:03:22.247 { 01:03:22.247 "nbd_device": "/dev/nbd1", 01:03:22.247 "bdev_name": "Malloc1" 01:03:22.247 } 01:03:22.247 ]' 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:03:22.247 { 01:03:22.247 "nbd_device": "/dev/nbd0", 01:03:22.247 "bdev_name": "Malloc0" 01:03:22.247 }, 01:03:22.247 { 01:03:22.247 "nbd_device": "/dev/nbd1", 01:03:22.247 "bdev_name": "Malloc1" 01:03:22.247 } 01:03:22.247 ]' 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:03:22.247 /dev/nbd1' 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:03:22.247 /dev/nbd1' 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:03:22.247 11:00:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:03:22.505 256+0 records in 01:03:22.505 256+0 records out 01:03:22.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00547244 s, 192 MB/s 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:03:22.505 256+0 records in 01:03:22.505 256+0 records out 01:03:22.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273168 s, 38.4 MB/s 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:03:22.505 256+0 records in 01:03:22.505 256+0 records out 01:03:22.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311335 s, 33.7 MB/s 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:03:22.505 11:00:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:03:22.778 11:00:27 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:22.778 11:00:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:03:23.086 11:00:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:03:23.086 11:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:03:23.086 11:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:03:23.086 11:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:03:23.086 11:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:03:23.086 11:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:03:23.086 11:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:03:23.086 11:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:03:23.086 11:00:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:03:23.086 11:00:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:03:23.086 11:00:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:03:23.086 11:00:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:03:23.086 11:00:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:03:23.343 11:00:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:03:23.601 [2024-07-22 11:00:28.612791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:03:23.601 [2024-07-22 11:00:28.654190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:03:23.601 [2024-07-22 11:00:28.654195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:23.601 [2024-07-22 11:00:28.695745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:23.601 [2024-07-22 11:00:28.695817] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:03:23.601 [2024-07-22 11:00:28.695827] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:03:26.880 11:00:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:03:26.880 spdk_app_start Round 1 01:03:26.880 11:00:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 01:03:26.880 11:00:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72689 /var/tmp/spdk-nbd.sock 01:03:26.880 11:00:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72689 ']' 01:03:26.880 11:00:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:03:26.880 11:00:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:26.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:03:26.880 11:00:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
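Round 0 of app_repeat above has just finished one full create/verify/teardown cycle before Round 1 repeats it against a fresh app instance. Condensed, the per-round flow that the nbd_common.sh helpers trace out looks like the sketch below; every RPC, path and size is taken from the traced commands (64 MiB malloc bdevs with 4096-byte blocks, a 1 MiB random file, direct-I/O writes and a cmp read-back), and it shows only the shape of the test, not the script itself:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    randfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    $rpc bdev_malloc_create 64 4096            # -> Malloc0
    $rpc bdev_malloc_create 64 4096            # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0      # expose each bdev as an NBD block device
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of="$randfile" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$randfile" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$randfile" "$nbd"        # verify the data written through the bdev
    done
    rm "$randfile"

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc nbd_get_disks                         # expect an empty list once both are stopped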
01:03:26.880 11:00:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:26.880 11:00:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:03:26.880 11:00:31 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:26.880 11:00:31 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 01:03:26.880 11:00:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:03:26.880 Malloc0 01:03:26.880 11:00:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:03:26.880 Malloc1 01:03:26.880 11:00:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:03:26.880 11:00:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:26.880 11:00:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:03:26.880 11:00:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:03:26.880 11:00:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:26.880 11:00:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:03:26.880 11:00:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:03:26.880 11:00:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:26.881 11:00:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:03:26.881 11:00:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:03:26.881 11:00:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:26.881 11:00:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:03:26.881 11:00:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:03:26.881 11:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:03:26.881 11:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:03:26.881 11:00:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:03:27.140 /dev/nbd0 01:03:27.140 11:00:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:03:27.140 11:00:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:03:27.140 11:00:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 01:03:27.140 11:00:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 01:03:27.140 11:00:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 01:03:27.140 11:00:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 01:03:27.140 11:00:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 01:03:27.140 11:00:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 01:03:27.140 11:00:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 01:03:27.140 11:00:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 01:03:27.140 11:00:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:03:27.140 1+0 records in 01:03:27.140 1+0 records out 
01:03:27.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002881 s, 14.2 MB/s 01:03:27.140 11:00:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:03:27.140 11:00:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 01:03:27.140 11:00:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:03:27.140 11:00:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 01:03:27.140 11:00:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 01:03:27.140 11:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:03:27.140 11:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:03:27.140 11:00:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:03:27.398 /dev/nbd1 01:03:27.398 11:00:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:03:27.398 11:00:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:03:27.398 11:00:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 01:03:27.398 11:00:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 01:03:27.398 11:00:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 01:03:27.398 11:00:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 01:03:27.398 11:00:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 01:03:27.398 11:00:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 01:03:27.398 11:00:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 01:03:27.398 11:00:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 01:03:27.398 11:00:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:03:27.398 1+0 records in 01:03:27.398 1+0 records out 01:03:27.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328905 s, 12.5 MB/s 01:03:27.398 11:00:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:03:27.398 11:00:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 01:03:27.398 11:00:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:03:27.398 11:00:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 01:03:27.398 11:00:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 01:03:27.398 11:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:03:27.398 11:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:03:27.398 11:00:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:03:27.398 11:00:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:27.398 11:00:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:03:27.656 { 01:03:27.656 "nbd_device": "/dev/nbd0", 01:03:27.656 "bdev_name": "Malloc0" 01:03:27.656 }, 01:03:27.656 { 01:03:27.656 "nbd_device": "/dev/nbd1", 01:03:27.656 "bdev_name": "Malloc1" 01:03:27.656 } 
01:03:27.656 ]' 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:03:27.656 { 01:03:27.656 "nbd_device": "/dev/nbd0", 01:03:27.656 "bdev_name": "Malloc0" 01:03:27.656 }, 01:03:27.656 { 01:03:27.656 "nbd_device": "/dev/nbd1", 01:03:27.656 "bdev_name": "Malloc1" 01:03:27.656 } 01:03:27.656 ]' 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:03:27.656 /dev/nbd1' 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:03:27.656 /dev/nbd1' 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:03:27.656 256+0 records in 01:03:27.656 256+0 records out 01:03:27.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102912 s, 102 MB/s 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:03:27.656 256+0 records in 01:03:27.656 256+0 records out 01:03:27.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247561 s, 42.4 MB/s 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:03:27.656 256+0 records in 01:03:27.656 256+0 records out 01:03:27.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297342 s, 35.3 MB/s 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:03:27.656 11:00:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:03:27.914 11:00:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:03:27.914 11:00:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:03:27.914 11:00:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:03:27.914 11:00:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:03:27.914 11:00:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:03:27.914 11:00:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:03:27.914 11:00:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:03:27.914 11:00:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:03:27.914 11:00:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:03:27.914 11:00:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:03:28.172 11:00:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:03:28.172 11:00:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:03:28.172 11:00:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:03:28.172 11:00:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:03:28.172 11:00:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:03:28.172 11:00:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:03:28.172 11:00:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:03:28.172 11:00:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:03:28.172 11:00:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:03:28.172 11:00:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:28.172 11:00:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:03:28.429 11:00:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:03:28.429 11:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:03:28.429 11:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 01:03:28.429 11:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:03:28.429 11:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:03:28.429 11:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:03:28.429 11:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:03:28.429 11:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:03:28.429 11:00:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:03:28.429 11:00:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:03:28.429 11:00:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:03:28.429 11:00:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:03:28.429 11:00:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:03:28.686 11:00:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:03:28.686 [2024-07-22 11:00:33.871221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:03:28.943 [2024-07-22 11:00:33.911783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:03:28.943 [2024-07-22 11:00:33.911785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:28.943 [2024-07-22 11:00:33.956108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:28.943 [2024-07-22 11:00:33.956179] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:03:28.943 [2024-07-22 11:00:33.956191] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:03:32.249 spdk_app_start Round 2 01:03:32.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:03:32.249 11:00:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:03:32.249 11:00:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 01:03:32.249 11:00:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72689 /var/tmp/spdk-nbd.sock 01:03:32.249 11:00:36 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72689 ']' 01:03:32.249 11:00:36 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:03:32.249 11:00:36 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:32.249 11:00:36 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
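[Editor's note] The nbd_dd_data_verify calls traced above run in two passes: a "write" pass that fills a temp file from /dev/urandom and dd's it onto each exported NBD device, and a "verify" pass that cmp's each device against that same file; the disks are then stopped and nbd_get_disks is expected to come back empty. A minimal re-sketch of the write/verify pattern, paraphrased from the traced commands rather than copied from bdev/nbd_common.sh (the scratch path below is illustrative only):

    # write a 1 MiB random pattern onto each exported NBD device, then verify it back
    tmp_file=/tmp/nbdrandtest            # illustrative scratch path, not the repo path used in the trace
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct    # O_DIRECT writes, as in the trace
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                               # non-zero exit on any mismatch
    done
    rm "$tmp_file"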
01:03:32.249 11:00:36 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:32.249 11:00:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:03:32.249 11:00:36 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:32.249 11:00:36 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 01:03:32.249 11:00:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:03:32.249 Malloc0 01:03:32.250 11:00:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:03:32.250 Malloc1 01:03:32.505 11:00:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:03:32.505 11:00:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:32.505 11:00:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:03:32.505 11:00:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:03:32.505 11:00:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:32.505 11:00:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:03:32.505 11:00:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:03:32.505 11:00:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:32.505 11:00:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:03:32.505 11:00:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:03:32.505 11:00:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:32.505 11:00:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:03:32.506 11:00:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:03:32.506 11:00:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:03:32.506 11:00:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:03:32.506 11:00:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:03:32.506 /dev/nbd0 01:03:32.762 11:00:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:03:32.762 11:00:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:03:32.762 11:00:37 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 01:03:32.762 11:00:37 event.app_repeat -- common/autotest_common.sh@867 -- # local i 01:03:32.762 11:00:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 01:03:32.762 11:00:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 01:03:32.762 11:00:37 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 01:03:32.762 11:00:37 event.app_repeat -- common/autotest_common.sh@871 -- # break 01:03:32.762 11:00:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 01:03:32.762 11:00:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 01:03:32.762 11:00:37 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:03:32.762 1+0 records in 01:03:32.762 1+0 records out 
01:03:32.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347201 s, 11.8 MB/s 01:03:32.762 11:00:37 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:03:32.762 11:00:37 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 01:03:32.762 11:00:37 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:03:32.762 11:00:37 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 01:03:32.762 11:00:37 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 01:03:32.762 11:00:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:03:32.762 11:00:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:03:32.762 11:00:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:03:33.019 /dev/nbd1 01:03:33.019 11:00:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:03:33.019 11:00:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:03:33.019 11:00:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 01:03:33.019 11:00:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 01:03:33.019 11:00:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 01:03:33.019 11:00:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 01:03:33.019 11:00:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 01:03:33.019 11:00:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 01:03:33.019 11:00:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 01:03:33.019 11:00:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 01:03:33.019 11:00:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:03:33.019 1+0 records in 01:03:33.019 1+0 records out 01:03:33.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446532 s, 9.2 MB/s 01:03:33.019 11:00:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:03:33.019 11:00:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 01:03:33.019 11:00:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:03:33.019 11:00:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 01:03:33.019 11:00:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 01:03:33.019 11:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:03:33.019 11:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:03:33.019 11:00:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:03:33.019 11:00:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:33.019 11:00:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:03:33.276 { 01:03:33.276 "nbd_device": "/dev/nbd0", 01:03:33.276 "bdev_name": "Malloc0" 01:03:33.276 }, 01:03:33.276 { 01:03:33.276 "nbd_device": "/dev/nbd1", 01:03:33.276 "bdev_name": "Malloc1" 01:03:33.276 } 
01:03:33.276 ]' 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:03:33.276 { 01:03:33.276 "nbd_device": "/dev/nbd0", 01:03:33.276 "bdev_name": "Malloc0" 01:03:33.276 }, 01:03:33.276 { 01:03:33.276 "nbd_device": "/dev/nbd1", 01:03:33.276 "bdev_name": "Malloc1" 01:03:33.276 } 01:03:33.276 ]' 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:03:33.276 /dev/nbd1' 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:03:33.276 /dev/nbd1' 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:03:33.276 256+0 records in 01:03:33.276 256+0 records out 01:03:33.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123581 s, 84.8 MB/s 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:03:33.276 256+0 records in 01:03:33.276 256+0 records out 01:03:33.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273926 s, 38.3 MB/s 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:03:33.276 256+0 records in 01:03:33.276 256+0 records out 01:03:33.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333183 s, 31.5 MB/s 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:03:33.276 11:00:38 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:03:33.276 11:00:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:03:33.533 11:00:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:03:34.041 11:00:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:03:34.041 11:00:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:03:34.041 11:00:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:03:34.041 11:00:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:03:34.041 11:00:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:03:34.041 11:00:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:03:34.041 11:00:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:03:34.041 11:00:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:03:34.041 11:00:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:03:34.041 11:00:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:03:34.041 11:00:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:03:34.299 11:00:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:03:34.299 11:00:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:03:34.299 11:00:39 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 01:03:34.299 11:00:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:03:34.299 11:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:03:34.299 11:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:03:34.299 11:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:03:34.299 11:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:03:34.299 11:00:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:03:34.299 11:00:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:03:34.299 11:00:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:03:34.299 11:00:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:03:34.299 11:00:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:03:34.557 11:00:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:03:34.814 [2024-07-22 11:00:39.813642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:03:34.814 [2024-07-22 11:00:39.878877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:34.814 [2024-07-22 11:00:39.878897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:03:34.814 [2024-07-22 11:00:39.954112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:34.814 [2024-07-22 11:00:39.954218] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:03:34.814 [2024-07-22 11:00:39.954233] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:03:37.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:03:37.336 11:00:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72689 /var/tmp/spdk-nbd.sock 01:03:37.336 11:00:42 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72689 ']' 01:03:37.336 11:00:42 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:03:37.336 11:00:42 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:37.336 11:00:42 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
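[Editor's note] The nbd_get_count trace above boils down to: query nbd_get_disks over the per-test RPC socket, reduce the JSON to device paths with jq, and count /dev/nbd entries with grep -c (which exits non-zero when the list is empty, hence the bare "true" in the trace). A hedged sketch of that count, assuming the rpc.py path and socket shown above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    disks_json=$("$rpc" -s "$sock" nbd_get_disks)                 # JSON array of {nbd_device, bdev_name}
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)             # grep -c exits 1 on an empty list
    echo "exported NBD devices: $count"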
01:03:37.336 11:00:42 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:37.336 11:00:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:03:37.594 11:00:42 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:37.594 11:00:42 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 01:03:37.594 11:00:42 event.app_repeat -- event/event.sh@39 -- # killprocess 72689 01:03:37.594 11:00:42 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 72689 ']' 01:03:37.594 11:00:42 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 72689 01:03:37.594 11:00:42 event.app_repeat -- common/autotest_common.sh@953 -- # uname 01:03:37.594 11:00:42 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:37.594 11:00:42 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72689 01:03:37.594 killing process with pid 72689 01:03:37.594 11:00:42 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:37.594 11:00:42 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:37.594 11:00:42 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72689' 01:03:37.594 11:00:42 event.app_repeat -- common/autotest_common.sh@967 -- # kill 72689 01:03:37.594 11:00:42 event.app_repeat -- common/autotest_common.sh@972 -- # wait 72689 01:03:37.850 spdk_app_start is called in Round 0. 01:03:37.850 Shutdown signal received, stop current app iteration 01:03:37.850 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 reinitialization... 01:03:37.850 spdk_app_start is called in Round 1. 01:03:37.850 Shutdown signal received, stop current app iteration 01:03:37.850 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 reinitialization... 01:03:37.850 spdk_app_start is called in Round 2. 01:03:37.850 Shutdown signal received, stop current app iteration 01:03:37.850 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 reinitialization... 01:03:37.850 spdk_app_start is called in Round 3. 
01:03:37.850 Shutdown signal received, stop current app iteration 01:03:38.108 11:00:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 01:03:38.108 11:00:43 event.app_repeat -- event/event.sh@42 -- # return 0 01:03:38.108 01:03:38.108 real 0m17.625s 01:03:38.108 user 0m38.364s 01:03:38.108 sys 0m3.060s 01:03:38.108 11:00:43 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:38.108 ************************************ 01:03:38.108 END TEST app_repeat 01:03:38.108 ************************************ 01:03:38.108 11:00:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:03:38.108 11:00:43 event -- common/autotest_common.sh@1142 -- # return 0 01:03:38.108 11:00:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 01:03:38.108 11:00:43 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:03:38.108 11:00:43 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:38.108 11:00:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:38.108 11:00:43 event -- common/autotest_common.sh@10 -- # set +x 01:03:38.108 ************************************ 01:03:38.108 START TEST cpu_locks 01:03:38.108 ************************************ 01:03:38.108 11:00:43 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:03:38.108 * Looking for test storage... 01:03:38.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 01:03:38.108 11:00:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 01:03:38.108 11:00:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 01:03:38.108 11:00:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 01:03:38.108 11:00:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 01:03:38.108 11:00:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:38.108 11:00:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:38.108 11:00:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:03:38.108 ************************************ 01:03:38.108 START TEST default_locks 01:03:38.108 ************************************ 01:03:38.108 11:00:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 01:03:38.108 11:00:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=73114 01:03:38.108 11:00:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:03:38.108 11:00:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 73114 01:03:38.108 11:00:43 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 73114 ']' 01:03:38.108 11:00:43 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:38.108 11:00:43 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:38.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:38.108 11:00:43 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
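[Editor's note] The app_repeat teardown above follows the killprocess pattern: confirm the pid is still alive with kill -0, read the process name (reactor_0 here) with ps, refuse to signal anything running as sudo, then kill and reap it. A simplified paraphrase, not the exact body of common/autotest_common.sh:

    pid=72689                                    # app_repeat target pid from the trace above
    if kill -0 "$pid" 2>/dev/null; then
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" != sudo ] || exit 1            # never signal a privileged wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true          # reaps it when the target is a child of this shell
    fi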
01:03:38.108 11:00:43 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:38.108 11:00:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:03:38.365 [2024-07-22 11:00:43.363580] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:03:38.365 [2024-07-22 11:00:43.363673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73114 ] 01:03:38.365 [2024-07-22 11:00:43.506018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:38.623 [2024-07-22 11:00:43.581022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:38.623 [2024-07-22 11:00:43.655149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:39.187 11:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:39.187 11:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 01:03:39.187 11:00:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 73114 01:03:39.187 11:00:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 73114 01:03:39.187 11:00:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:03:39.444 11:00:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 73114 01:03:39.444 11:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 73114 ']' 01:03:39.444 11:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 73114 01:03:39.444 11:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 01:03:39.444 11:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:39.444 11:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73114 01:03:39.701 11:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:39.701 11:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:39.701 killing process with pid 73114 01:03:39.701 11:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73114' 01:03:39.701 11:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 73114 01:03:39.701 11:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 73114 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 73114 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 73114 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:40.266 11:00:45 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 73114 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 73114 ']' 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:40.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:03:40.266 ERROR: process (pid: 73114) is no longer running 01:03:40.266 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (73114) - No such process 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 01:03:40.266 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:03:40.267 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:03:40.267 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:03:40.267 11:00:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 01:03:40.267 11:00:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 01:03:40.267 11:00:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 01:03:40.267 11:00:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 01:03:40.267 01:03:40.267 real 0m1.943s 01:03:40.267 user 0m1.863s 01:03:40.267 sys 0m0.673s 01:03:40.267 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:40.267 ************************************ 01:03:40.267 END TEST default_locks 01:03:40.267 11:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:03:40.267 ************************************ 01:03:40.267 11:00:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 01:03:40.267 11:00:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 01:03:40.267 11:00:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:40.267 11:00:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:40.267 11:00:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:03:40.267 ************************************ 01:03:40.267 START TEST default_locks_via_rpc 01:03:40.267 ************************************ 01:03:40.267 11:00:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 01:03:40.267 11:00:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=73166 01:03:40.267 11:00:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:03:40.267 11:00:45 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 73166 01:03:40.267 11:00:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 73166 ']' 01:03:40.267 11:00:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:40.267 11:00:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:40.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:40.267 11:00:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:40.267 11:00:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:40.267 11:00:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:03:40.267 [2024-07-22 11:00:45.378091] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:03:40.267 [2024-07-22 11:00:45.378168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73166 ] 01:03:40.575 [2024-07-22 11:00:45.520770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:40.575 [2024-07-22 11:00:45.595188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:40.575 [2024-07-22 11:00:45.669114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 73166 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 73166 01:03:41.154 11:00:46 event.cpu_locks.default_locks_via_rpc 
-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:03:41.717 11:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 73166 01:03:41.717 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 73166 ']' 01:03:41.717 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 73166 01:03:41.717 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 01:03:41.717 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:41.717 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73166 01:03:41.717 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:41.717 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:41.717 killing process with pid 73166 01:03:41.717 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73166' 01:03:41.717 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 73166 01:03:41.717 11:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 73166 01:03:41.974 01:03:41.974 real 0m1.807s 01:03:41.974 user 0m1.786s 01:03:41.975 sys 0m0.676s 01:03:41.975 11:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:41.975 11:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:03:41.975 ************************************ 01:03:41.975 END TEST default_locks_via_rpc 01:03:41.975 ************************************ 01:03:42.231 11:00:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 01:03:42.231 11:00:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 01:03:42.231 11:00:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:42.231 11:00:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:42.231 11:00:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:03:42.231 ************************************ 01:03:42.231 START TEST non_locking_app_on_locked_coremask 01:03:42.231 ************************************ 01:03:42.231 11:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 01:03:42.231 11:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=73211 01:03:42.231 11:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 73211 /var/tmp/spdk.sock 01:03:42.231 11:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:03:42.231 11:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 73211 ']' 01:03:42.231 11:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:42.231 11:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:42.231 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 01:03:42.231 11:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:42.231 11:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:42.231 11:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:03:42.231 [2024-07-22 11:00:47.259634] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:03:42.231 [2024-07-22 11:00:47.259707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73211 ] 01:03:42.231 [2024-07-22 11:00:47.402134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:42.488 [2024-07-22 11:00:47.450436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:42.488 [2024-07-22 11:00:47.491927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:43.054 11:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:43.054 11:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 01:03:43.054 11:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=73227 01:03:43.054 11:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 73227 /var/tmp/spdk2.sock 01:03:43.054 11:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 01:03:43.054 11:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 73227 ']' 01:03:43.054 11:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 01:03:43.054 11:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:43.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:03:43.054 11:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:03:43.054 11:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:43.054 11:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:03:43.054 [2024-07-22 11:00:48.155796] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:03:43.054 [2024-07-22 11:00:48.155886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73227 ] 01:03:43.313 [2024-07-22 11:00:48.291531] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
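[Editor's note] The default_locks and default_locks_via_rpc cases above hinge on the locks_exist check: spdk_tgt started with -m 0x1 takes a file lock for its claimed core, and the test asserts that an spdk_cpu_lock entry shows up in lslocks output for that pid (the via_rpc variant toggles the same behaviour with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs). A minimal sketch of the assertion, paraphrasing the traced lslocks | grep:

    pid=73114                                    # spdk_tgt pid from the default_locks trace above
    # the target takes a file lock for each claimed core; assert it is visible for this pid
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "core lock held by pid $pid"
    else
        echo "no spdk_cpu_lock entry for pid $pid" >&2
        exit 1
    fi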
01:03:43.313 [2024-07-22 11:00:48.291581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:43.313 [2024-07-22 11:00:48.387401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:43.313 [2024-07-22 11:00:48.468692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:43.878 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:43.878 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 01:03:43.878 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 73211 01:03:43.878 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73211 01:03:43.878 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:03:44.809 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 73211 01:03:44.809 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 73211 ']' 01:03:44.809 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 73211 01:03:44.809 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 01:03:44.809 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:44.809 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73211 01:03:44.809 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:44.809 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:44.809 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73211' 01:03:44.809 killing process with pid 73211 01:03:44.809 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 73211 01:03:44.809 11:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 73211 01:03:45.373 11:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 73227 01:03:45.373 11:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 73227 ']' 01:03:45.373 11:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 73227 01:03:45.373 11:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 01:03:45.373 11:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:45.373 11:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73227 01:03:45.373 11:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:45.374 killing process with pid 73227 01:03:45.374 11:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:45.374 11:00:50 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73227' 01:03:45.374 11:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 73227 01:03:45.374 11:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 73227 01:03:45.630 01:03:45.631 real 0m3.557s 01:03:45.631 user 0m3.889s 01:03:45.631 sys 0m0.989s 01:03:45.631 11:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:45.631 11:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:03:45.631 ************************************ 01:03:45.631 END TEST non_locking_app_on_locked_coremask 01:03:45.631 ************************************ 01:03:45.631 11:00:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 01:03:45.631 11:00:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 01:03:45.631 11:00:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:45.631 11:00:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:45.631 11:00:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:03:45.631 ************************************ 01:03:45.631 START TEST locking_app_on_unlocked_coremask 01:03:45.631 ************************************ 01:03:45.631 11:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 01:03:45.631 11:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=73289 01:03:45.631 11:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 73289 /var/tmp/spdk.sock 01:03:45.631 11:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 01:03:45.631 11:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 73289 ']' 01:03:45.631 11:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:45.631 11:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:45.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:45.631 11:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:45.631 11:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:45.631 11:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:03:45.888 [2024-07-22 11:00:50.886271] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
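[Editor's note] non_locking_app_on_locked_coremask, traced above, starts one spdk_tgt that holds the core-0 lock and a second one on the same mask with --disable-cpumask-locks and its own RPC socket, expecting both to run side by side. A condensed sketch of that launch sequence, with the binary path, mask, and sockets taken from the trace (the surrounding waits and assertions are elided):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$bin" -m 0x1 &                                            # first instance locks core 0, RPC on /var/tmp/spdk.sock
    pid1=$!
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                    # second instance skips the lock, own RPC socket
    # ... wait for both sockets, run the lock assertions, then tear down ...
    kill "$pid1" "$pid2"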
01:03:45.888 [2024-07-22 11:00:50.886345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73289 ] 01:03:45.888 [2024-07-22 11:00:51.015424] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 01:03:45.888 [2024-07-22 11:00:51.015478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:45.888 [2024-07-22 11:00:51.063729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:46.145 [2024-07-22 11:00:51.105993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:46.723 11:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:46.723 11:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 01:03:46.723 11:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=73305 01:03:46.723 11:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 73305 /var/tmp/spdk2.sock 01:03:46.723 11:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 01:03:46.723 11:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 73305 ']' 01:03:46.723 11:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 01:03:46.723 11:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:46.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:03:46.723 11:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:03:46.723 11:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:46.723 11:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:03:46.723 [2024-07-22 11:00:51.811489] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
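[Editor's note] Every "Waiting for process to start up and listen on UNIX domain socket ..." line above comes from waitforlisten, which blocks until the freshly started target answers on its RPC socket or a retry budget runs out. The real helper lives in test/common/autotest_common.sh; the loop below is only an illustrative stand-in for what that wait amounts to, using the rpc_get_methods RPC as the readiness probe:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk2.sock
    retries=100                                   # the trace sets max_retries=100

    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        retries=$((retries - 1))
        [ "$retries" -gt 0 ] || { echo "target never listened on $sock" >&2; exit 1; }
        sleep 0.5
    done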
01:03:46.723 [2024-07-22 11:00:51.811931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73305 ] 01:03:46.981 [2024-07-22 11:00:51.949763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:46.981 [2024-07-22 11:00:52.045930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:46.981 [2024-07-22 11:00:52.127617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:47.545 11:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:47.545 11:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 01:03:47.545 11:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 73305 01:03:47.545 11:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73305 01:03:47.545 11:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:03:48.479 11:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 73289 01:03:48.479 11:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 73289 ']' 01:03:48.479 11:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 73289 01:03:48.479 11:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 01:03:48.479 11:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:48.479 11:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73289 01:03:48.479 11:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:48.479 11:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:48.479 11:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73289' 01:03:48.479 killing process with pid 73289 01:03:48.479 11:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 73289 01:03:48.479 11:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 73289 01:03:49.042 11:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 73305 01:03:49.042 11:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 73305 ']' 01:03:49.042 11:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 73305 01:03:49.042 11:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 01:03:49.042 11:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:49.042 11:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73305 01:03:49.042 killing process with pid 73305 01:03:49.042 11:00:54 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:49.042 11:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:49.042 11:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73305' 01:03:49.042 11:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 73305 01:03:49.042 11:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 73305 01:03:49.325 01:03:49.325 real 0m3.660s 01:03:49.325 user 0m4.021s 01:03:49.325 sys 0m1.039s 01:03:49.325 11:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:49.325 11:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:03:49.325 ************************************ 01:03:49.325 END TEST locking_app_on_unlocked_coremask 01:03:49.325 ************************************ 01:03:49.581 11:00:54 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 01:03:49.581 11:00:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 01:03:49.581 11:00:54 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:49.581 11:00:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:49.581 11:00:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:03:49.581 ************************************ 01:03:49.581 START TEST locking_app_on_locked_coremask 01:03:49.581 ************************************ 01:03:49.581 11:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 01:03:49.581 11:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=73372 01:03:49.581 11:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 73372 /var/tmp/spdk.sock 01:03:49.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:49.581 11:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 73372 ']' 01:03:49.581 11:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:49.581 11:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:49.581 11:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:03:49.581 11:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:49.581 11:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:49.581 11:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:03:49.581 [2024-07-22 11:00:54.618926] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
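The locks_exist check traced above (lslocks -p <pid> piped into grep -q spdk_cpu_lock) is how these tests confirm that a running spdk_tgt is actually holding its /var/tmp/spdk_cpu_lock_* files. A minimal stand-alone sketch of that check, not the exact helper from the test scripts; it assumes $1 is the PID of an spdk_tgt started without --disable-cpumask-locks and that util-linux lslocks is installed:

#!/usr/bin/env bash
# Report whether the given spdk_tgt PID holds its CPU core lock files.
pid=$1
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "PID $pid holds /var/tmp/spdk_cpu_lock_* file locks"
else
    echo "PID $pid holds no spdk_cpu_lock files" >&2
    exit 1
fi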
01:03:49.581 [2024-07-22 11:00:54.619004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73372 ] 01:03:49.581 [2024-07-22 11:00:54.760548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:49.838 [2024-07-22 11:00:54.809254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:49.838 [2024-07-22 11:00:54.851037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=73382 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 73382 /var/tmp/spdk2.sock 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 73382 /var/tmp/spdk2.sock 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 73382 /var/tmp/spdk2.sock 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 73382 ']' 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:03:50.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:50.402 11:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:03:50.402 [2024-07-22 11:00:55.497615] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:03:50.402 [2024-07-22 11:00:55.497919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73382 ] 01:03:50.659 [2024-07-22 11:00:55.637452] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 73372 has claimed it. 01:03:50.659 [2024-07-22 11:00:55.637516] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 01:03:51.224 ERROR: process (pid: 73382) is no longer running 01:03:51.224 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (73382) - No such process 01:03:51.224 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:51.224 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 01:03:51.224 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 01:03:51.224 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:03:51.224 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:03:51.224 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:03:51.224 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 73372 01:03:51.224 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73372 01:03:51.224 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:03:51.490 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 73372 01:03:51.490 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 73372 ']' 01:03:51.490 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 73372 01:03:51.490 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 01:03:51.748 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:51.748 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73372 01:03:51.748 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:51.748 killing process with pid 73372 01:03:51.748 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:51.748 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73372' 01:03:51.748 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 73372 01:03:51.748 11:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 73372 01:03:52.006 01:03:52.006 real 0m2.486s 01:03:52.006 user 0m2.759s 01:03:52.006 sys 0m0.620s 01:03:52.006 11:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:52.006 11:00:57 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 01:03:52.006 ************************************ 01:03:52.006 END TEST locking_app_on_locked_coremask 01:03:52.006 ************************************ 01:03:52.006 11:00:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 01:03:52.006 11:00:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 01:03:52.006 11:00:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:52.006 11:00:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:52.006 11:00:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:03:52.006 ************************************ 01:03:52.006 START TEST locking_overlapped_coremask 01:03:52.006 ************************************ 01:03:52.006 11:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 01:03:52.006 11:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=73428 01:03:52.006 11:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 01:03:52.006 11:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 73428 /var/tmp/spdk.sock 01:03:52.006 11:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 73428 ']' 01:03:52.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:52.006 11:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:52.006 11:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:52.006 11:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:52.006 11:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:52.006 11:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:03:52.006 [2024-07-22 11:00:57.180539] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:03:52.006 [2024-07-22 11:00:57.180612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73428 ] 01:03:52.264 [2024-07-22 11:00:57.323497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:03:52.264 [2024-07-22 11:00:57.373783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:03:52.264 [2024-07-22 11:00:57.373974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:52.264 [2024-07-22 11:00:57.373974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:03:52.264 [2024-07-22 11:00:57.418098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=73448 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 73448 /var/tmp/spdk2.sock 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 73448 /var/tmp/spdk2.sock 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 73448 /var/tmp/spdk2.sock 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 73448 ']' 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:03:52.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:52.829 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:03:53.087 [2024-07-22 11:00:58.078349] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:03:53.087 [2024-07-22 11:00:58.078675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73448 ] 01:03:53.087 [2024-07-22 11:00:58.215875] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73428 has claimed it. 01:03:53.087 [2024-07-22 11:00:58.216085] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 01:03:53.650 ERROR: process (pid: 73448) is no longer running 01:03:53.650 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (73448) - No such process 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 73428 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 73428 ']' 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 73428 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73428 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73428' 01:03:53.650 killing process with pid 73428 01:03:53.650 11:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 73428 01:03:53.650 11:00:58 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 73428 01:03:54.214 01:03:54.214 real 0m2.044s 01:03:54.214 user 0m5.531s 01:03:54.214 sys 0m0.414s 01:03:54.214 ************************************ 01:03:54.214 END TEST locking_overlapped_coremask 01:03:54.214 ************************************ 01:03:54.214 11:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:54.214 11:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:03:54.214 11:00:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 01:03:54.214 11:00:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 01:03:54.214 11:00:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:54.214 11:00:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:54.214 11:00:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:03:54.214 ************************************ 01:03:54.214 START TEST locking_overlapped_coremask_via_rpc 01:03:54.214 ************************************ 01:03:54.214 11:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 01:03:54.214 11:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=73488 01:03:54.214 11:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 01:03:54.214 11:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 73488 /var/tmp/spdk.sock 01:03:54.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:54.214 11:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 73488 ']' 01:03:54.214 11:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:54.214 11:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:54.214 11:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:54.214 11:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:54.214 11:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:03:54.214 [2024-07-22 11:00:59.297137] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:03:54.214 [2024-07-22 11:00:59.297212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73488 ] 01:03:54.471 [2024-07-22 11:00:59.425449] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
01:03:54.471 [2024-07-22 11:00:59.425515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:03:54.471 [2024-07-22 11:00:59.496740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:03:54.471 [2024-07-22 11:00:59.496919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:54.471 [2024-07-22 11:00:59.496914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:03:54.471 [2024-07-22 11:00:59.548497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:55.035 11:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:55.035 11:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 01:03:55.035 11:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=73506 01:03:55.035 11:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 73506 /var/tmp/spdk2.sock 01:03:55.035 11:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 01:03:55.035 11:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 73506 ']' 01:03:55.035 11:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 01:03:55.035 11:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:55.035 11:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:03:55.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:03:55.035 11:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:55.035 11:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:03:55.035 [2024-07-22 11:01:00.200590] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:03:55.035 [2024-07-22 11:01:00.200888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73506 ] 01:03:55.290 [2024-07-22 11:01:00.335874] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
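The locking_overlapped_coremask_via_rpc setup above starts two targets whose core masks overlap on core 2 (0x7 and 0x1c); that only works because both are launched with --disable-cpumask-locks, which is what the "CPU core locks deactivated" notices are reporting. A hedged sketch of that launch sequence, with the binary path, masks and socket name copied from the trace (the sleep is only a stand-in for the real waitforlisten helper):

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

# First target: cores 0-2, default RPC socket, core locks left disabled.
"$SPDK_TGT" -m 0x7 --disable-cpumask-locks &
pid1=$!

# Second target: cores 2-4, overlapping on core 2, on its own RPC socket.
"$SPDK_TGT" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
pid2=$!

sleep 1   # stand-in for waitforlisten on /var/tmp/spdk.sock and /var/tmp/spdk2.sock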
01:03:55.290 [2024-07-22 11:01:00.339861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:03:55.290 [2024-07-22 11:01:00.441707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:03:55.290 [2024-07-22 11:01:00.446079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 01:03:55.290 [2024-07-22 11:01:00.446089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:03:55.547 [2024-07-22 11:01:00.591781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:03:56.110 [2024-07-22 11:01:01.095010] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73488 has claimed it. 
01:03:56.110 request: 01:03:56.110 { 01:03:56.110 "method": "framework_enable_cpumask_locks", 01:03:56.110 "req_id": 1 01:03:56.110 } 01:03:56.110 Got JSON-RPC error response 01:03:56.110 response: 01:03:56.110 { 01:03:56.110 "code": -32603, 01:03:56.110 "message": "Failed to claim CPU core: 2" 01:03:56.110 } 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 73488 /var/tmp/spdk.sock 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 73488 ']' 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:56.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:56.110 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 73506 /var/tmp/spdk2.sock 01:03:56.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 73506 ']' 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
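The JSON-RPC exchange above is the point of this test: turning the CPU mask locks back on at runtime succeeds for the first target, but the same call against the second target returns error -32603 ("Failed to claim CPU core: 2") because core 2 is already locked by pid 73488. Assuming scripts/rpc.py exposes the method under the name shown in the request (the rpc_cmd wrapper in the trace ultimately calls that script), the two calls look like this:

# First target, default socket /var/tmp/spdk.sock: claims cores 0-2 and succeeds.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks

# Second target, mask 0x1c: fails with -32603 because core 2 is already claimed.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks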
01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:03:56.366 ************************************ 01:03:56.366 END TEST locking_overlapped_coremask_via_rpc 01:03:56.366 ************************************ 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:03:56.366 01:03:56.366 real 0m2.284s 01:03:56.366 user 0m0.991s 01:03:56.366 sys 0m0.214s 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:56.366 11:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:03:56.622 11:01:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 01:03:56.622 11:01:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 01:03:56.622 11:01:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73488 ]] 01:03:56.622 11:01:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73488 01:03:56.622 11:01:01 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 73488 ']' 01:03:56.622 11:01:01 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 73488 01:03:56.622 11:01:01 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 01:03:56.622 11:01:01 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:56.622 11:01:01 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73488 01:03:56.622 killing process with pid 73488 01:03:56.622 11:01:01 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:03:56.622 11:01:01 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:03:56.623 11:01:01 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73488' 01:03:56.623 11:01:01 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 73488 01:03:56.623 11:01:01 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 73488 01:03:56.879 11:01:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73506 ]] 01:03:56.879 11:01:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73506 01:03:56.879 11:01:01 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 73506 ']' 01:03:56.879 11:01:01 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 73506 01:03:56.879 11:01:01 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 01:03:56.879 11:01:01 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:03:56.879 11:01:01 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73506 01:03:56.879 killing process with pid 73506 01:03:56.879 11:01:01 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:03:56.879 11:01:01 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:03:56.879 11:01:01 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73506' 01:03:56.879 11:01:01 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 73506 01:03:56.879 11:01:01 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 73506 01:03:57.443 11:01:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:03:57.443 Process with pid 73488 is not found 01:03:57.443 Process with pid 73506 is not found 01:03:57.443 11:01:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 01:03:57.443 11:01:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73488 ]] 01:03:57.443 11:01:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73488 01:03:57.443 11:01:02 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 73488 ']' 01:03:57.443 11:01:02 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 73488 01:03:57.443 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (73488) - No such process 01:03:57.443 11:01:02 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 73488 is not found' 01:03:57.443 11:01:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73506 ]] 01:03:57.443 11:01:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73506 01:03:57.443 11:01:02 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 73506 ']' 01:03:57.443 11:01:02 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 73506 01:03:57.443 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (73506) - No such process 01:03:57.443 11:01:02 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 73506 is not found' 01:03:57.443 11:01:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:03:57.443 01:03:57.443 real 0m19.405s 01:03:57.443 user 0m32.383s 01:03:57.443 sys 0m5.722s 01:03:57.443 11:01:02 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:57.443 11:01:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:03:57.443 ************************************ 01:03:57.443 END TEST cpu_locks 01:03:57.443 ************************************ 01:03:57.443 11:01:02 event -- common/autotest_common.sh@1142 -- # return 0 01:03:57.443 01:03:57.443 real 0m47.108s 01:03:57.443 user 1m29.200s 01:03:57.443 sys 0m9.735s 01:03:57.443 11:01:02 event -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:57.443 11:01:02 event -- common/autotest_common.sh@10 -- # set +x 01:03:57.443 ************************************ 01:03:57.443 END TEST event 01:03:57.443 ************************************ 01:03:57.706 11:01:02 -- common/autotest_common.sh@1142 -- # return 0 01:03:57.706 11:01:02 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:03:57.706 11:01:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:03:57.706 11:01:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:57.706 11:01:02 -- common/autotest_common.sh@10 -- # set +x 01:03:57.706 ************************************ 01:03:57.706 START TEST thread 
01:03:57.706 ************************************ 01:03:57.706 11:01:02 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:03:57.706 * Looking for test storage... 01:03:57.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 01:03:57.706 11:01:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:03:57.706 11:01:02 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 01:03:57.706 11:01:02 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:57.706 11:01:02 thread -- common/autotest_common.sh@10 -- # set +x 01:03:57.706 ************************************ 01:03:57.707 START TEST thread_poller_perf 01:03:57.707 ************************************ 01:03:57.707 11:01:02 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:03:57.707 [2024-07-22 11:01:02.847458] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:03:57.707 [2024-07-22 11:01:02.847588] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73634 ] 01:03:58.017 [2024-07-22 11:01:02.992832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:58.017 [2024-07-22 11:01:03.041863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:03:58.017 Running 1000 pollers for 1 seconds with 1 microseconds period. 01:03:58.949 ====================================== 01:03:58.949 busy:2497930834 (cyc) 01:03:58.949 total_run_count: 388000 01:03:58.949 tsc_hz: 2490000000 (cyc) 01:03:58.949 ====================================== 01:03:58.949 poller_cost: 6437 (cyc), 2585 (nsec) 01:03:58.949 01:03:58.949 real 0m1.293s 01:03:58.949 ************************************ 01:03:58.949 END TEST thread_poller_perf 01:03:58.949 ************************************ 01:03:58.949 user 0m1.120s 01:03:58.949 sys 0m0.064s 01:03:58.949 11:01:04 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 01:03:58.949 11:01:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:03:59.206 11:01:04 thread -- common/autotest_common.sh@1142 -- # return 0 01:03:59.206 11:01:04 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:03:59.206 11:01:04 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 01:03:59.206 11:01:04 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 01:03:59.206 11:01:04 thread -- common/autotest_common.sh@10 -- # set +x 01:03:59.206 ************************************ 01:03:59.206 START TEST thread_poller_perf 01:03:59.206 ************************************ 01:03:59.206 11:01:04 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:03:59.206 [2024-07-22 11:01:04.211611] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
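The poller_cost line in the report above follows from the other figures: busy cycles divided by total_run_count gives cycles per poller invocation, and dividing by the 2490000000 Hz TSC rate converts that to nanoseconds. A quick arithmetic check with the first run's numbers, assuming poller_cost is exactly that ratio (the output is consistent with it):

busy=2497930834; runs=388000; tsc_hz=2490000000
echo $(( busy / runs ))                          # 6437 cycles per poller call
echo $(( busy / runs * 1000000000 / tsc_hz ))    # 2585 nsec per poller call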
01:03:59.206 [2024-07-22 11:01:04.211711] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73664 ] 01:03:59.206 [2024-07-22 11:01:04.354915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:59.206 Running 1000 pollers for 1 seconds with 0 microseconds period. 01:03:59.206 [2024-07-22 11:01:04.403816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:00.577 ====================================== 01:04:00.577 busy:2491861088 (cyc) 01:04:00.577 total_run_count: 5320000 01:04:00.577 tsc_hz: 2490000000 (cyc) 01:04:00.577 ====================================== 01:04:00.577 poller_cost: 468 (cyc), 187 (nsec) 01:04:00.577 01:04:00.577 real 0m1.283s 01:04:00.577 user 0m1.123s 01:04:00.577 sys 0m0.054s 01:04:00.577 11:01:05 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:00.577 ************************************ 01:04:00.577 END TEST thread_poller_perf 01:04:00.577 ************************************ 01:04:00.577 11:01:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:04:00.577 11:01:05 thread -- common/autotest_common.sh@1142 -- # return 0 01:04:00.577 11:01:05 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 01:04:00.577 ************************************ 01:04:00.577 END TEST thread 01:04:00.577 ************************************ 01:04:00.577 01:04:00.577 real 0m2.849s 01:04:00.577 user 0m2.340s 01:04:00.577 sys 0m0.300s 01:04:00.577 11:01:05 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:00.577 11:01:05 thread -- common/autotest_common.sh@10 -- # set +x 01:04:00.577 11:01:05 -- common/autotest_common.sh@1142 -- # return 0 01:04:00.577 11:01:05 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 01:04:00.577 11:01:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:04:00.577 11:01:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:00.577 11:01:05 -- common/autotest_common.sh@10 -- # set +x 01:04:00.577 ************************************ 01:04:00.577 START TEST accel 01:04:00.577 ************************************ 01:04:00.577 11:01:05 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 01:04:00.577 * Looking for test storage... 
01:04:00.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 01:04:00.577 11:01:05 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 01:04:00.577 11:01:05 accel -- accel/accel.sh@82 -- # get_expected_opcs 01:04:00.577 11:01:05 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 01:04:00.577 11:01:05 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=73733 01:04:00.577 11:01:05 accel -- accel/accel.sh@63 -- # waitforlisten 73733 01:04:00.577 11:01:05 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 01:04:00.577 11:01:05 accel -- common/autotest_common.sh@829 -- # '[' -z 73733 ']' 01:04:00.577 11:01:05 accel -- accel/accel.sh@61 -- # build_accel_config 01:04:00.577 11:01:05 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:00.577 11:01:05 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:00.577 11:01:05 accel -- common/autotest_common.sh@834 -- # local max_retries=100 01:04:00.577 11:01:05 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:04:00.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:04:00.577 11:01:05 accel -- common/autotest_common.sh@838 -- # xtrace_disable 01:04:00.577 11:01:05 accel -- common/autotest_common.sh@10 -- # set +x 01:04:00.577 11:01:05 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:00.577 11:01:05 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:00.577 11:01:05 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:00.577 11:01:05 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:00.577 11:01:05 accel -- accel/accel.sh@40 -- # local IFS=, 01:04:00.577 11:01:05 accel -- accel/accel.sh@41 -- # jq -r . 01:04:00.835 [2024-07-22 11:01:05.786470] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:00.835 [2024-07-22 11:01:05.786546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73733 ] 01:04:00.835 [2024-07-22 11:01:05.920590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:00.835 [2024-07-22 11:01:05.969489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:00.835 [2024-07-22 11:01:06.012008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@862 -- # return 0 01:04:01.767 11:01:06 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 01:04:01.767 11:01:06 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 01:04:01.767 11:01:06 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 01:04:01.767 11:01:06 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 01:04:01.767 11:01:06 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 01:04:01.767 11:01:06 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 01:04:01.767 11:01:06 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@559 -- # xtrace_disable 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@10 -- # set +x 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 
11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # IFS== 01:04:01.767 11:01:06 accel -- accel/accel.sh@72 -- # read -r opc module 01:04:01.767 11:01:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:04:01.767 11:01:06 accel -- accel/accel.sh@75 -- # killprocess 73733 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@948 -- # '[' -z 73733 ']' 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@952 -- # kill -0 73733 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@953 -- # uname 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73733 01:04:01.767 killing process with pid 73733 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73733' 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@967 -- # kill 73733 01:04:01.767 11:01:06 accel -- common/autotest_common.sh@972 -- # wait 73733 01:04:02.025 11:01:07 accel -- accel/accel.sh@76 -- # trap - ERR 01:04:02.025 11:01:07 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 01:04:02.025 11:01:07 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:04:02.025 11:01:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:02.025 11:01:07 accel -- common/autotest_common.sh@10 -- # set +x 01:04:02.025 11:01:07 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 01:04:02.025 11:01:07 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 01:04:02.025 11:01:07 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 01:04:02.025 11:01:07 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:02.025 11:01:07 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:02.025 11:01:07 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:02.025 11:01:07 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:02.025 11:01:07 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:02.025 11:01:07 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 01:04:02.025 11:01:07 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
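The long jq pipeline traced above converts the accel_get_opc_assignments RPC output into opcode=module pairs so the test can record which module owns each opcode (all "software" in this run, since no hardware accel engines are configured). A condensed sketch of the same query against a running target, assuming the default RPC socket and that jq is installed:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# Prints one "opcode=module" line per opcode; in the run above every line
# ends in "=software".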
01:04:02.025 11:01:07 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:02.025 11:01:07 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 01:04:02.025 11:01:07 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:02.025 11:01:07 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 01:04:02.025 11:01:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 01:04:02.025 11:01:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:02.025 11:01:07 accel -- common/autotest_common.sh@10 -- # set +x 01:04:02.025 ************************************ 01:04:02.025 START TEST accel_missing_filename 01:04:02.025 ************************************ 01:04:02.025 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 01:04:02.025 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 01:04:02.025 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 01:04:02.025 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 01:04:02.025 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:02.025 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 01:04:02.025 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:02.025 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 01:04:02.025 11:01:07 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 01:04:02.025 11:01:07 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 01:04:02.025 11:01:07 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:02.025 11:01:07 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:02.025 11:01:07 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:02.025 11:01:07 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:02.025 11:01:07 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:02.025 11:01:07 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 01:04:02.025 11:01:07 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 01:04:02.025 [2024-07-22 11:01:07.189077] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:02.025 [2024-07-22 11:01:07.189351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73790 ] 01:04:02.290 [2024-07-22 11:01:07.330646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:02.290 [2024-07-22 11:01:07.378919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:02.290 [2024-07-22 11:01:07.421397] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:04:02.290 [2024-07-22 11:01:07.481512] accel_perf.c:1463:main: *ERROR*: ERROR starting application 01:04:02.548 A filename is required. 
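"A filename is required." above is the expected outcome: accel_missing_filename deliberately runs a compress workload without the -l input file and only passes if accel_perf refuses to start. A bare-bones version of that negative check using the same binary and flags as the trace (the real test wraps this in the NOT helper from autotest_common.sh and then classifies the exit status, as the es= lines below show):

if /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress; then
    echo "compress without -l unexpectedly succeeded" >&2
    exit 1
fi
echo "accel_perf rejected the missing filename, as expected"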
01:04:02.548 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 01:04:02.548 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:04:02.548 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 01:04:02.548 ************************************ 01:04:02.548 END TEST accel_missing_filename 01:04:02.548 ************************************ 01:04:02.548 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 01:04:02.548 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 01:04:02.548 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:04:02.548 01:04:02.548 real 0m0.396s 01:04:02.548 user 0m0.225s 01:04:02.548 sys 0m0.109s 01:04:02.548 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:02.548 11:01:07 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 01:04:02.548 11:01:07 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:02.548 11:01:07 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:04:02.548 11:01:07 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 01:04:02.548 11:01:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:02.548 11:01:07 accel -- common/autotest_common.sh@10 -- # set +x 01:04:02.548 ************************************ 01:04:02.548 START TEST accel_compress_verify 01:04:02.548 ************************************ 01:04:02.548 11:01:07 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:04:02.548 11:01:07 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 01:04:02.548 11:01:07 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:04:02.548 11:01:07 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 01:04:02.548 11:01:07 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:02.548 11:01:07 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 01:04:02.548 11:01:07 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:02.548 11:01:07 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:04:02.548 11:01:07 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:04:02.548 11:01:07 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 01:04:02.548 11:01:07 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:02.548 11:01:07 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:02.548 11:01:07 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:02.548 11:01:07 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:02.548 11:01:07 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:02.548 11:01:07 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 01:04:02.548 11:01:07 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 01:04:02.548 [2024-07-22 11:01:07.656665] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:02.548 [2024-07-22 11:01:07.656758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73809 ] 01:04:02.806 [2024-07-22 11:01:07.798178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:02.806 [2024-07-22 11:01:07.846760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:02.806 [2024-07-22 11:01:07.889623] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:04:02.806 [2024-07-22 11:01:07.950101] accel_perf.c:1463:main: *ERROR*: ERROR starting application 01:04:03.065 01:04:03.065 Compression does not support the verify option, aborting. 01:04:03.065 11:01:08 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 01:04:03.065 11:01:08 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:04:03.065 11:01:08 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 01:04:03.065 11:01:08 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 01:04:03.065 11:01:08 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 01:04:03.065 11:01:08 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:04:03.065 01:04:03.065 real 0m0.397s 01:04:03.065 user 0m0.228s 01:04:03.066 sys 0m0.106s 01:04:03.066 11:01:08 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:03.066 11:01:08 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 01:04:03.066 ************************************ 01:04:03.066 END TEST accel_compress_verify 01:04:03.066 ************************************ 01:04:03.066 11:01:08 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:03.066 11:01:08 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 01:04:03.066 11:01:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 01:04:03.066 11:01:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:03.066 11:01:08 accel -- common/autotest_common.sh@10 -- # set +x 01:04:03.066 ************************************ 01:04:03.066 START TEST accel_wrong_workload 01:04:03.066 ************************************ 01:04:03.066 11:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 01:04:03.066 11:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 01:04:03.066 11:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 01:04:03.066 11:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 01:04:03.066 11:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:03.066 11:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 01:04:03.066 11:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:03.066 11:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
01:04:03.066 11:01:08 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 01:04:03.066 11:01:08 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 01:04:03.066 11:01:08 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:03.066 11:01:08 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:03.066 11:01:08 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:03.066 11:01:08 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:03.066 11:01:08 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:03.066 11:01:08 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 01:04:03.066 11:01:08 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 01:04:03.066 Unsupported workload type: foobar 01:04:03.066 [2024-07-22 11:01:08.122219] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 01:04:03.066 accel_perf options: 01:04:03.066 [-h help message] 01:04:03.066 [-q queue depth per core] 01:04:03.066 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 01:04:03.066 [-T number of threads per core 01:04:03.066 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 01:04:03.066 [-t time in seconds] 01:04:03.066 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 01:04:03.066 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 01:04:03.066 [-M assign module to the operation, not compatible with accel_assign_opc RPC 01:04:03.066 [-l for compress/decompress workloads, name of uncompressed input file 01:04:03.066 [-S for crc32c workload, use this seed value (default 0) 01:04:03.066 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 01:04:03.066 [-f for fill workload, use this BYTE value (default 255) 01:04:03.066 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 01:04:03.066 [-y verify result if this switch is on] 01:04:03.066 [-a tasks to allocate per core (default: same value as -q)] 01:04:03.066 Can be used to spread operations across a wider range of memory. 
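The accel_wrong_workload and accel_negative_buffers cases lean on the framework's NOT helper from autotest_common.sh, which inverts the exit status of the command it wraps so that an expected failure counts as a pass. A simplified stand-alone equivalent is sketched below; the real helper is more involved and also records the error code used in the es checks that follow:

  NOT() {
      # succeed only if the wrapped command fails
      if "$@"; then
          return 1
      fi
      return 0
  }
  # an unsupported workload type should make accel_perf exit non-zero
  NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar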
01:04:03.066 11:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 01:04:03.066 11:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:04:03.066 11:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:04:03.066 11:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:04:03.066 01:04:03.066 real 0m0.042s 01:04:03.066 user 0m0.023s 01:04:03.066 sys 0m0.018s 01:04:03.066 11:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:03.066 11:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 01:04:03.066 ************************************ 01:04:03.066 END TEST accel_wrong_workload 01:04:03.066 ************************************ 01:04:03.066 11:01:08 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:03.066 11:01:08 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 01:04:03.066 11:01:08 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 01:04:03.066 11:01:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:03.066 11:01:08 accel -- common/autotest_common.sh@10 -- # set +x 01:04:03.066 ************************************ 01:04:03.066 START TEST accel_negative_buffers 01:04:03.066 ************************************ 01:04:03.066 11:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 01:04:03.066 11:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 01:04:03.066 11:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 01:04:03.066 11:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 01:04:03.066 11:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:03.066 11:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 01:04:03.066 11:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:03.066 11:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 01:04:03.066 11:01:08 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 01:04:03.066 11:01:08 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 01:04:03.066 11:01:08 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:03.066 11:01:08 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:03.066 11:01:08 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:03.066 11:01:08 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:03.066 11:01:08 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:03.066 11:01:08 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 01:04:03.066 11:01:08 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 01:04:03.066 -x option must be non-negative. 
01:04:03.066 [2024-07-22 11:01:08.229341] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 01:04:03.066 accel_perf options: 01:04:03.066 [-h help message] 01:04:03.066 [-q queue depth per core] 01:04:03.066 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 01:04:03.066 [-T number of threads per core 01:04:03.066 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 01:04:03.066 [-t time in seconds] 01:04:03.066 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 01:04:03.066 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 01:04:03.066 [-M assign module to the operation, not compatible with accel_assign_opc RPC 01:04:03.066 [-l for compress/decompress workloads, name of uncompressed input file 01:04:03.066 [-S for crc32c workload, use this seed value (default 0) 01:04:03.066 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 01:04:03.066 [-f for fill workload, use this BYTE value (default 255) 01:04:03.066 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 01:04:03.066 [-y verify result if this switch is on] 01:04:03.066 [-a tasks to allocate per core (default: same value as -q)] 01:04:03.066 Can be used to spread operations across a wider range of memory. 01:04:03.066 11:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 01:04:03.066 11:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:04:03.066 11:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:04:03.066 11:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:04:03.066 01:04:03.066 real 0m0.047s 01:04:03.066 user 0m0.022s 01:04:03.066 sys 0m0.022s 01:04:03.066 11:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:03.066 ************************************ 01:04:03.066 END TEST accel_negative_buffers 01:04:03.066 11:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 01:04:03.066 ************************************ 01:04:03.326 11:01:08 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:03.326 11:01:08 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 01:04:03.326 11:01:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 01:04:03.326 11:01:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:03.326 11:01:08 accel -- common/autotest_common.sh@10 -- # set +x 01:04:03.326 ************************************ 01:04:03.326 START TEST accel_crc32c 01:04:03.326 ************************************ 01:04:03.326 11:01:08 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 01:04:03.326 11:01:08 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 01:04:03.326 11:01:08 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 01:04:03.326 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.326 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.326 11:01:08 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 01:04:03.326 11:01:08 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 01:04:03.326 11:01:08 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 01:04:03.326 11:01:08 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:03.326 11:01:08 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:03.326 11:01:08 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:03.326 11:01:08 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:03.326 11:01:08 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:03.326 11:01:08 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 01:04:03.326 11:01:08 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 01:04:03.326 [2024-07-22 11:01:08.328041] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:03.326 [2024-07-22 11:01:08.328132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73873 ] 01:04:03.326 [2024-07-22 11:01:08.469926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:03.326 [2024-07-22 11:01:08.518363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 01:04:03.584 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:03.585 11:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 01:04:04.526 11:01:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:04.526 01:04:04.526 real 0m1.399s 01:04:04.526 user 0m0.021s 01:04:04.526 sys 0m0.007s 01:04:04.526 11:01:09 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:04.526 11:01:09 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 01:04:04.526 ************************************ 01:04:04.527 END TEST accel_crc32c 01:04:04.527 ************************************ 01:04:04.800 11:01:09 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:04.800 11:01:09 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 01:04:04.800 11:01:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 01:04:04.800 11:01:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:04.800 11:01:09 accel -- common/autotest_common.sh@10 -- # set +x 01:04:04.800 ************************************ 01:04:04.800 START TEST accel_crc32c_C2 01:04:04.800 ************************************ 01:04:04.800 11:01:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 01:04:04.800 11:01:09 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 01:04:04.800 11:01:09 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 01:04:04.800 11:01:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:04.800 11:01:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:04.800 11:01:09 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 01:04:04.800 11:01:09 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 01:04:04.800 11:01:09 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 01:04:04.800 11:01:09 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:04.800 11:01:09 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:04.800 11:01:09 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:04.800 11:01:09 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:04.800 11:01:09 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:04.800 11:01:09 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 01:04:04.800 11:01:09 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 01:04:04.800 [2024-07-22 11:01:09.797195] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:04.800 [2024-07-22 11:01:09.797305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73902 ] 01:04:04.800 [2024-07-22 11:01:09.938410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:04.800 [2024-07-22 11:01:09.988726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.057 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.058 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.058 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:05.058 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.058 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.058 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.058 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:05.058 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.058 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.058 11:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:05.989 11:01:11 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:05.989 01:04:05.989 real 0m1.400s 01:04:05.989 user 0m1.205s 01:04:05.989 sys 0m0.109s 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:05.989 11:01:11 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 01:04:05.989 ************************************ 01:04:05.989 END TEST accel_crc32c_C2 01:04:05.989 ************************************ 01:04:06.247 11:01:11 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:06.247 11:01:11 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 01:04:06.247 11:01:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 01:04:06.247 11:01:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:06.247 11:01:11 accel -- common/autotest_common.sh@10 -- # set +x 01:04:06.247 ************************************ 01:04:06.247 START TEST accel_copy 01:04:06.247 ************************************ 01:04:06.247 11:01:11 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 01:04:06.247 11:01:11 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 01:04:06.247 11:01:11 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 01:04:06.247 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.247 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.247 11:01:11 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 01:04:06.247 11:01:11 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 01:04:06.247 11:01:11 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 01:04:06.247 11:01:11 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:06.247 11:01:11 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:06.247 11:01:11 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:06.247 11:01:11 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:06.247 11:01:11 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:06.247 11:01:11 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 01:04:06.247 11:01:11 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 01:04:06.247 [2024-07-22 11:01:11.268333] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:06.247 [2024-07-22 11:01:11.268426] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73938 ] 01:04:06.247 [2024-07-22 11:01:11.415581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:06.504 [2024-07-22 11:01:11.463842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 
11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val=software 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val=32 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val=32 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val=1 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:06.504 11:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 01:04:07.435 11:01:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:07.435 01:04:07.435 real 0m1.400s 01:04:07.435 user 0m1.207s 01:04:07.435 sys 0m0.106s 01:04:07.435 11:01:12 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:07.435 ************************************ 01:04:07.435 END TEST accel_copy 01:04:07.435 ************************************ 01:04:07.435 11:01:12 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 01:04:07.693 11:01:12 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:07.693 11:01:12 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 01:04:07.693 11:01:12 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 01:04:07.693 11:01:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:07.694 11:01:12 accel -- common/autotest_common.sh@10 -- # set +x 01:04:07.694 ************************************ 01:04:07.694 START TEST accel_fill 01:04:07.694 ************************************ 01:04:07.694 11:01:12 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 01:04:07.694 11:01:12 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 01:04:07.694 11:01:12 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 01:04:07.694 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.694 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.694 11:01:12 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 01:04:07.694 11:01:12 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 01:04:07.694 11:01:12 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 01:04:07.694 11:01:12 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:07.694 11:01:12 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:07.694 11:01:12 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:07.694 11:01:12 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:07.694 11:01:12 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:07.694 11:01:12 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 01:04:07.694 11:01:12 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 01:04:07.694 [2024-07-22 11:01:12.738336] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:07.694 [2024-07-22 11:01:12.738418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73971 ] 01:04:07.694 [2024-07-22 11:01:12.881542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:07.964 [2024-07-22 11:01:12.924796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.964 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:04:07.965 11:01:12 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val=software 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val=64 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val=64 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val=1 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:07.965 11:01:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 01:04:08.900 11:01:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:08.900 01:04:08.900 real 0m1.400s 01:04:08.900 user 0m1.194s 01:04:08.900 sys 0m0.121s 01:04:08.900 11:01:14 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:08.900 11:01:14 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 01:04:08.900 ************************************ 01:04:08.900 END TEST accel_fill 01:04:08.900 ************************************ 01:04:09.157 11:01:14 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:09.157 11:01:14 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 01:04:09.157 11:01:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 01:04:09.157 11:01:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:09.157 11:01:14 accel -- common/autotest_common.sh@10 -- # set +x 01:04:09.157 ************************************ 01:04:09.157 START TEST accel_copy_crc32c 01:04:09.157 ************************************ 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 01:04:09.157 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 01:04:09.157 [2024-07-22 11:01:14.202275] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:09.157 [2024-07-22 11:01:14.202353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74006 ] 01:04:09.157 [2024-07-22 11:01:14.345087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:09.415 [2024-07-22 11:01:14.387811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.415 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:09.416 11:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:10.350 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:04:10.350 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
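The copy_crc32c case traced here is driven by the command printed at accel.sh@12: /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y. The settings it echoes back are two '4096 bytes' values, a 0, two 32s, a 1, '1 seconds', Yes, and the software module; the trace does not label the bare numbers, so their meaning is left open here. A hand-run sketch of that invocation, where the fd-62 wiring through build_accel_config is an assumption about how the harness supplies its (empty, per accel_json_cfg=()) JSON config:

# Assumes test/accel/accel.sh has been sourced so that build_accel_config exists.
accel_perf_bin=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf

args=(
    -c /dev/fd/62     # JSON accel config read from fd 62
    -t 1              # run for 1 second ('1 seconds' in the echoed settings)
    -w copy_crc32c    # workload under test: copy plus CRC32C
    -y                # verify results, so the sh@27 checks can assert module/opcode afterwards
)

"$accel_perf_bin" "${args[@]}" 62< <(build_accel_config)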
01:04:10.350 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:10.350 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:10.350 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:04:10.350 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:10.350 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:10.350 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:10.350 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:04:10.350 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:10.350 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:10.609 01:04:10.609 real 0m1.388s 01:04:10.609 user 0m1.190s 01:04:10.609 sys 0m0.114s 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:10.609 11:01:15 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 01:04:10.609 ************************************ 01:04:10.609 END TEST accel_copy_crc32c 01:04:10.609 ************************************ 01:04:10.609 11:01:15 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:10.609 11:01:15 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 01:04:10.609 11:01:15 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 01:04:10.609 11:01:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:10.609 11:01:15 accel -- common/autotest_common.sh@10 -- # set +x 01:04:10.609 ************************************ 01:04:10.609 START TEST accel_copy_crc32c_C2 01:04:10.609 ************************************ 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 01:04:10.609 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 01:04:10.609 [2024-07-22 11:01:15.651699] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:10.609 [2024-07-22 11:01:15.651769] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74040 ] 01:04:10.609 [2024-07-22 11:01:15.793278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:10.868 [2024-07-22 11:01:15.839058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.868 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.869 11:01:15 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:10.869 11:01:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 01:04:11.873 ************************************ 01:04:11.873 END TEST accel_copy_crc32c_C2 01:04:11.873 ************************************ 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:11.873 01:04:11.873 real 0m1.386s 01:04:11.873 
user 0m1.193s 01:04:11.873 sys 0m0.103s 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:11.873 11:01:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 01:04:11.873 11:01:17 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:11.873 11:01:17 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 01:04:11.873 11:01:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 01:04:11.873 11:01:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:11.873 11:01:17 accel -- common/autotest_common.sh@10 -- # set +x 01:04:12.131 ************************************ 01:04:12.131 START TEST accel_dualcast 01:04:12.131 ************************************ 01:04:12.131 11:01:17 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 01:04:12.131 11:01:17 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 01:04:12.131 11:01:17 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 01:04:12.131 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.131 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.131 11:01:17 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 01:04:12.131 11:01:17 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 01:04:12.131 11:01:17 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 01:04:12.131 11:01:17 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:12.131 11:01:17 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:12.131 11:01:17 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:12.131 11:01:17 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:12.131 11:01:17 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:12.131 11:01:17 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 01:04:12.131 11:01:17 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 01:04:12.131 [2024-07-22 11:01:17.114652] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
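The copy_crc32c_C2 pass that finished just above differs from the plain copy_crc32c run only by the extra '-C 2' argument on its accel.sh@12 command line, and its echoed settings add an '8192 bytes' value next to the 4096-byte one; reading -C as the number of chained CRC operations per submission is an inference from that, not something the log states. Relative to the earlier sketch, the only change would be:

# Same args array as in the copy_crc32c sketch above, plus the -C 2 this variant adds.
"$accel_perf_bin" "${args[@]}" -C 2 62< <(build_accel_config)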
01:04:12.131 [2024-07-22 11:01:17.114753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74069 ] 01:04:12.131 [2024-07-22 11:01:17.258852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:12.131 [2024-07-22 11:01:17.300441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:12.389 11:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:13.322 11:01:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 01:04:13.322 11:01:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:13.322 11:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:13.322 11:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:13.323 ************************************ 01:04:13.323 END TEST accel_dualcast 01:04:13.323 ************************************ 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 01:04:13.323 11:01:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:13.323 01:04:13.323 real 0m1.393s 01:04:13.323 user 0m1.195s 01:04:13.323 sys 0m0.109s 01:04:13.323 11:01:18 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:13.323 11:01:18 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 01:04:13.323 11:01:18 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:13.581 11:01:18 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 01:04:13.581 11:01:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 01:04:13.581 11:01:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:13.581 11:01:18 accel -- common/autotest_common.sh@10 -- # set +x 01:04:13.581 ************************************ 01:04:13.581 START TEST accel_compare 01:04:13.581 ************************************ 01:04:13.581 11:01:18 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 01:04:13.581 11:01:18 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 01:04:13.581 11:01:18 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 01:04:13.581 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.581 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.581 11:01:18 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 01:04:13.581 11:01:18 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 01:04:13.581 11:01:18 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 01:04:13.581 11:01:18 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:13.581 11:01:18 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:13.581 11:01:18 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:13.581 11:01:18 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:13.581 11:01:18 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:13.581 11:01:18 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 01:04:13.581 11:01:18 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 01:04:13.581 [2024-07-22 11:01:18.570799] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
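The dualcast pass that just ended and the compare pass starting here follow the same pattern and change only the -w argument (-w dualcast -y and -w compare -y on their accel.sh@12 lines). A hypothetical convenience loop, reusing the accel_perf_bin and build_accel_config assumptions from the earlier sketch, to reproduce this stretch of the log by hand:

# Not part of the harness; just replays the traced invocations back to back.
for wl in copy_crc32c dualcast compare xor; do
    "$accel_perf_bin" -c /dev/fd/62 -t 1 -w "$wl" -y 62< <(build_accel_config)
done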
01:04:13.581 [2024-07-22 11:01:18.571031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74104 ] 01:04:13.581 [2024-07-22 11:01:18.712349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:13.581 [2024-07-22 11:01:18.754829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val=software 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val=32 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val=32 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val=1 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:13.840 11:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 
01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:14.776 ************************************ 01:04:14.776 END TEST accel_compare 01:04:14.776 ************************************ 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 01:04:14.776 11:01:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:14.776 01:04:14.776 real 0m1.389s 01:04:14.776 user 0m1.196s 01:04:14.776 sys 0m0.103s 01:04:14.776 11:01:19 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:14.776 11:01:19 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 01:04:15.035 11:01:19 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:15.035 11:01:19 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 01:04:15.035 11:01:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 01:04:15.035 11:01:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:15.035 11:01:19 accel -- common/autotest_common.sh@10 -- # set +x 01:04:15.035 ************************************ 01:04:15.035 START TEST accel_xor 01:04:15.035 ************************************ 01:04:15.035 11:01:19 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 01:04:15.035 11:01:19 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 01:04:15.035 11:01:19 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 01:04:15.035 11:01:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.035 11:01:19 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 01:04:15.035 11:01:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.035 11:01:20 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 01:04:15.035 11:01:20 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 01:04:15.035 11:01:20 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:15.035 11:01:20 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:15.035 11:01:20 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:15.035 11:01:20 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:15.035 11:01:20 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:15.035 11:01:20 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 01:04:15.035 11:01:20 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 01:04:15.035 [2024-07-22 11:01:20.029505] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:15.035 [2024-07-22 11:01:20.029578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74138 ] 01:04:15.035 [2024-07-22 11:01:20.171388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:15.035 [2024-07-22 11:01:20.213543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:15.294 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:15.294 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.294 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.294 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.294 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:15.294 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.294 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.294 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.294 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 01:04:15.294 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.294 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=2 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=software 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
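The xor test running here also shows how the helpers stack: run_test (accel.sh@109) wraps accel_test, accel_test (accel.sh@15) forwards its arguments to accel_perf, and accel_perf (accel.sh@12) launches the example binary with the JSON config on fd 62. The function bodies below are a reconstruction of that shape, not the actual accel.sh code; only the call chain and the arguments come from the trace.

# Reconstructed shape of the helpers seen at accel.sh@12/@15 (bodies are assumptions).
accel_perf() {
    # accel.sh@12: launch the example binary with the harness JSON config on fd 62
    "$accel_perf_bin" -c /dev/fd/62 "$@" 62< <(build_accel_config)
}
accel_test() {
    # accel.sh@15+: the real function also declares accel_opc/accel_module and parses
    # the output (the sh@16-@23 lines in the trace); reduced here to the forwarding call
    accel_perf "$@"
}
run_test accel_xor accel_test -t 1 -w xor -y    # accel.sh@109, as traced above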
01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=1 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:15.295 11:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:16.234 11:01:21 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 01:04:16.234 11:01:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:16.234 01:04:16.234 real 0m1.390s 01:04:16.234 user 0m1.201s 01:04:16.234 sys 0m0.101s 01:04:16.234 11:01:21 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:16.234 ************************************ 01:04:16.234 END TEST accel_xor 01:04:16.234 ************************************ 01:04:16.234 11:01:21 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 01:04:16.493 11:01:21 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:16.493 11:01:21 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 01:04:16.493 11:01:21 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 01:04:16.493 11:01:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:16.493 11:01:21 accel -- common/autotest_common.sh@10 -- # set +x 01:04:16.494 ************************************ 01:04:16.494 START TEST accel_xor 01:04:16.494 ************************************ 01:04:16.494 11:01:21 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 01:04:16.494 11:01:21 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 01:04:16.494 11:01:21 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 01:04:16.494 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.494 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.494 11:01:21 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 01:04:16.494 11:01:21 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 01:04:16.494 11:01:21 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 01:04:16.494 11:01:21 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:16.494 11:01:21 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:16.494 11:01:21 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:16.494 11:01:21 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:16.494 11:01:21 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:16.494 11:01:21 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 01:04:16.494 11:01:21 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 01:04:16.494 [2024-07-22 11:01:21.492409] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:16.494 [2024-07-22 11:01:21.492486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74173 ] 01:04:16.494 [2024-07-22 11:01:21.634458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:16.494 [2024-07-22 11:01:21.675908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val=3 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val=software 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
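This second accel_xor test (the START TEST banner reuses the name) is the -x 3 variant from accel.sh@110, and its echoed settings show val=3 where the first xor pass showed val=2, consistent with -x setting the number of xor source buffers; that reading is inferred from the trace rather than stated by it. The traced command, with the same assumed fd-62 wiring as in the earlier sketches:

# Three-source xor run as traced at accel.sh@110 / accel.sh@12.
"$accel_perf_bin" -c /dev/fd/62 -t 1 -w xor -y -x 3 62< <(build_accel_config)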
01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val=1 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:16.753 11:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:17.689 11:01:22 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 01:04:17.689 11:01:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:17.689 01:04:17.689 real 0m1.390s 01:04:17.689 user 0m1.179s 01:04:17.690 sys 0m0.123s 01:04:17.690 11:01:22 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:17.690 11:01:22 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 01:04:17.690 ************************************ 01:04:17.690 END TEST accel_xor 01:04:17.690 ************************************ 01:04:17.955 11:01:22 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:17.955 11:01:22 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 01:04:17.955 11:01:22 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 01:04:17.955 11:01:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:17.955 11:01:22 accel -- common/autotest_common.sh@10 -- # set +x 01:04:17.955 ************************************ 01:04:17.955 START TEST accel_dif_verify 01:04:17.955 ************************************ 01:04:17.955 11:01:22 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 01:04:17.955 11:01:22 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 01:04:17.955 11:01:22 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 01:04:17.955 11:01:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:17.955 11:01:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:17.955 11:01:22 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 01:04:17.955 11:01:22 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 01:04:17.955 11:01:22 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 01:04:17.955 11:01:22 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:17.955 11:01:22 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:17.955 11:01:22 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:17.955 11:01:22 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:17.955 11:01:22 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:17.955 11:01:22 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 01:04:17.955 11:01:22 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 01:04:17.955 [2024-07-22 11:01:22.947438] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
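The three-source xor pass finished cleanly on the software module (about 1.39 s of wall time), and the harness has moved on to "run_test accel_dif_verify accel_test -t 1 -w dif_verify". A minimal standalone sketch, again assuming the empty fd-62 JSON config can be dropped; the 4096-, 512- and 8-byte values parsed below presumably describe the data buffer, block and DIF metadata sizes, though the log does not label them:

  # Hedged sketch: one-second DIF-verify run on the software module.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify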
01:04:17.955 [2024-07-22 11:01:22.947516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74203 ] 01:04:17.955 [2024-07-22 11:01:23.075776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:17.955 [2024-07-22 11:01:23.115918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:18.230 11:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 01:04:19.170 11:01:24 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 01:04:19.170 ************************************ 01:04:19.170 END TEST accel_dif_verify 01:04:19.170 ************************************ 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 01:04:19.170 11:01:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:19.170 01:04:19.170 real 0m1.371s 01:04:19.170 user 0m1.189s 01:04:19.170 sys 0m0.093s 01:04:19.170 11:01:24 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:19.170 11:01:24 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 01:04:19.170 11:01:24 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:19.170 11:01:24 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 01:04:19.170 11:01:24 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 01:04:19.170 11:01:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:19.170 11:01:24 accel -- common/autotest_common.sh@10 -- # set +x 01:04:19.170 ************************************ 01:04:19.170 START TEST accel_dif_generate 01:04:19.170 ************************************ 01:04:19.170 11:01:24 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 01:04:19.170 11:01:24 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 01:04:19.170 11:01:24 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 01:04:19.170 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.170 11:01:24 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 01:04:19.170 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.170 11:01:24 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 01:04:19.170 11:01:24 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 01:04:19.170 11:01:24 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:19.170 11:01:24 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:19.170 11:01:24 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:19.170 11:01:24 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:19.170 11:01:24 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:19.170 11:01:24 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 01:04:19.170 11:01:24 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 01:04:19.429 [2024-07-22 11:01:24.389021] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:19.430 [2024-07-22 11:01:24.389114] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74242 ] 01:04:19.430 [2024-07-22 11:01:24.535697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:19.430 [2024-07-22 11:01:24.577177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.430 11:01:24 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.430 11:01:24 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 01:04:19.689 11:01:24 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:19.689 11:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:20.626 11:01:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 01:04:20.626 11:01:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:20.626 11:01:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:20.626 11:01:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:20.626 11:01:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 01:04:20.626 11:01:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:20.626 11:01:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:20.626 11:01:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:20.626 11:01:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 01:04:20.626 11:01:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:20.626 11:01:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:20.626 11:01:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:20.626 11:01:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 01:04:20.626 11:01:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:20.627 11:01:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:20.627 11:01:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:20.627 11:01:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 01:04:20.627 11:01:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:20.627 11:01:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:20.627 11:01:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:20.627 11:01:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 01:04:20.627 11:01:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 01:04:20.627 11:01:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 01:04:20.627 11:01:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 01:04:20.627 11:01:25 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:20.627 11:01:25 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 01:04:20.627 11:01:25 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:20.627 01:04:20.627 real 0m1.394s 
01:04:20.627 user 0m1.202s 01:04:20.627 sys 0m0.107s 01:04:20.627 11:01:25 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:20.627 11:01:25 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 01:04:20.627 ************************************ 01:04:20.627 END TEST accel_dif_generate 01:04:20.627 ************************************ 01:04:20.627 11:01:25 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:20.627 11:01:25 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 01:04:20.627 11:01:25 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 01:04:20.627 11:01:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:20.627 11:01:25 accel -- common/autotest_common.sh@10 -- # set +x 01:04:20.627 ************************************ 01:04:20.627 START TEST accel_dif_generate_copy 01:04:20.627 ************************************ 01:04:20.627 11:01:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 01:04:20.627 11:01:25 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 01:04:20.627 11:01:25 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 01:04:20.627 11:01:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:20.627 11:01:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:20.627 11:01:25 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 01:04:20.885 11:01:25 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 01:04:20.885 11:01:25 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 01:04:20.885 11:01:25 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:20.885 11:01:25 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:20.885 11:01:25 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:20.885 11:01:25 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:20.885 11:01:25 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:20.885 11:01:25 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 01:04:20.885 11:01:25 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 01:04:20.885 [2024-07-22 11:01:25.858290] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
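With the dif_generate pass done (roughly 1.39 s of wall time), the harness starts dif_generate_copy; the captured command line differs from the previous two DIF tests only in the workload name. A hedged sketch of the standalone equivalents, under the same assumption that the empty JSON config on /dev/fd/62 can be omitted:

  # Generate DIF metadata only, then generate-and-copy, each for one second in software.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy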
01:04:20.885 [2024-07-22 11:01:25.858370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74271 ] 01:04:20.885 [2024-07-22 11:01:25.998573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:20.885 [2024-07-22 11:01:26.041175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:20.885 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:04:20.885 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:20.885 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:20.885 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:20.885 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:04:20.885 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:20.885 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 01:04:20.886 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:21.144 11:01:26 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:21.144 11:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
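The block of "IFS=:", "read -r var val" and "case \"$var\" in" lines repeated throughout this section (including directly above) appears to be the banner parser in accel/accel.sh: accel_perf prints its configuration as "key: value" pairs, and the script splits each line on ":" to record the active opcode and module for the final [[ -n software ]] / [[ -n dif_generate_copy ]] checks. A rough, hypothetical reconstruction of that loop follows; the key names in the case patterns are invented for illustration and are not visible in this trace:

  # Hypothetical sketch of the parser behind the traces above; the "workload"/"module"
  # patterns are placeholders, not taken from this log.
  while IFS=: read -r var val; do
      case "$var" in
          *"workload"*) accel_opc=${val//[[:space:]]/} ;;
          *"module"*)   accel_module=${val//[[:space:]]/} ;;
      esac
  done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy)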
01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:22.079 01:04:22.079 real 0m1.388s 01:04:22.079 user 0m1.199s 01:04:22.079 sys 0m0.103s 01:04:22.079 ************************************ 01:04:22.079 END TEST accel_dif_generate_copy 01:04:22.079 ************************************ 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:22.079 11:01:27 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 01:04:22.079 11:01:27 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:22.079 11:01:27 accel -- accel/accel.sh@115 -- # [[ y == y ]] 01:04:22.079 11:01:27 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 01:04:22.079 11:01:27 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 01:04:22.079 11:01:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:22.079 11:01:27 accel -- common/autotest_common.sh@10 -- # set +x 01:04:22.337 ************************************ 01:04:22.338 START TEST accel_comp 01:04:22.338 ************************************ 01:04:22.338 11:01:27 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 01:04:22.338 11:01:27 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 01:04:22.338 11:01:27 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 01:04:22.338 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.338 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.338 11:01:27 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 01:04:22.338 11:01:27 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 01:04:22.338 11:01:27 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 01:04:22.338 11:01:27 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:22.338 11:01:27 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:22.338 11:01:27 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:22.338 11:01:27 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:22.338 11:01:27 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:22.338 11:01:27 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 01:04:22.338 11:01:27 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 01:04:22.338 [2024-07-22 11:01:27.322906] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:22.338 [2024-07-22 11:01:27.323134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74305 ] 01:04:22.338 [2024-07-22 11:01:27.470533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:22.338 [2024-07-22 11:01:27.514172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=software 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=32 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=32 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=1 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=No 01:04:22.596 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.597 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.597 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.597 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:04:22.597 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.597 11:01:27 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.597 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:22.597 11:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:04:22.597 11:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:22.597 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:22.597 11:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 01:04:23.533 11:01:28 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:23.533 01:04:23.533 real 0m1.401s 01:04:23.533 user 0m1.205s 01:04:23.533 sys 0m0.107s 01:04:23.533 11:01:28 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:23.533 ************************************ 01:04:23.533 END TEST accel_comp 01:04:23.533 ************************************ 01:04:23.533 11:01:28 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 01:04:23.793 11:01:28 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:23.793 11:01:28 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:04:23.793 11:01:28 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 01:04:23.793 11:01:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:23.793 11:01:28 accel -- common/autotest_common.sh@10 -- # set +x 01:04:23.793 ************************************ 01:04:23.793 START TEST accel_decomp 01:04:23.793 ************************************ 01:04:23.793 11:01:28 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:04:23.793 11:01:28 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 01:04:23.793 11:01:28 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 01:04:23.793 11:01:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:23.793 11:01:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:23.793 11:01:28 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:04:23.793 11:01:28 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:04:23.793 11:01:28 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 01:04:23.793 11:01:28 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:23.793 11:01:28 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:23.793 11:01:28 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:23.793 11:01:28 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:23.793 11:01:28 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:23.793 11:01:28 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 01:04:23.793 11:01:28 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 01:04:23.793 [2024-07-22 11:01:28.790705] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:23.793 [2024-07-22 11:01:28.790784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74340 ] 01:04:23.793 [2024-07-22 11:01:28.923776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:23.793 [2024-07-22 11:01:28.965221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.052 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
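After the compress pass (accel_comp, "-w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib", about 1.40 s of wall time), the decompress test feeds the same bib input file and adds -y. A minimal standalone sketch under the usual assumption that the empty fd-62 JSON config can be dropped:

  # Hedged sketch: compress the test file for one second, then the decompress variant
  # with -y (read here as result verification, which is an assumption).
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y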
01:04:24.053 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.053 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.053 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.053 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:04:24.053 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.053 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.053 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.053 11:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:04:24.053 11:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.053 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.053 11:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 01:04:24.989 11:01:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:24.989 01:04:24.989 real 0m1.388s 01:04:24.989 user 0m1.205s 01:04:24.989 sys 0m0.095s 01:04:24.989 11:01:30 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:24.989 ************************************ 01:04:24.989 END TEST accel_decomp 01:04:24.989 ************************************ 01:04:24.989 11:01:30 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 01:04:25.249 11:01:30 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:25.249 11:01:30 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 01:04:25.249 11:01:30 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 01:04:25.249 11:01:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:25.249 11:01:30 accel -- common/autotest_common.sh@10 -- # set +x 01:04:25.249 ************************************ 01:04:25.249 START TEST accel_decomp_full 01:04:25.249 ************************************ 01:04:25.249 11:01:30 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 01:04:25.249 11:01:30 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 01:04:25.249 11:01:30 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 01:04:25.249 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.249 11:01:30 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 01:04:25.249 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.249 11:01:30 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 01:04:25.249 11:01:30 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 01:04:25.249 11:01:30 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:25.249 11:01:30 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:25.249 11:01:30 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:25.249 11:01:30 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:25.249 11:01:30 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:25.249 11:01:30 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 01:04:25.249 11:01:30 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 01:04:25.249 [2024-07-22 11:01:30.246458] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
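The accel_decomp_full run that starts here is driven by the same accel_test wrapper as the previous test; the command line recorded in the trace launches the accel_perf example with full-size buffers. A minimal standalone reproduction, assuming the repository layout shown in this log (the harness additionally passes -c /dev/fd/62 to feed the JSON produced by build_accel_config, which appears safe to drop here since the trace shows an empty accel_json_cfg):

  # Flags exactly as recorded above; the comments are a best-effort reading of this
  # trace, not authoritative accel_perf documentation.
  #   -t 1            run the workload for one second
  #   -w decompress   workload type under test
  #   -l <file>       compressed input file used by the decompress workload
  #   -y              verify the decompressed output
  #   -o 0            per this trace, switches the buffer size from '4096 bytes' to '111250 bytes'
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0
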
01:04:25.249 [2024-07-22 11:01:30.246538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74369 ] 01:04:25.249 [2024-07-22 11:01:30.386936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:25.249 [2024-07-22 11:01:30.430576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.509 11:01:30 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 01:04:25.509 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:25.510 11:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:26.544 11:01:31 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 01:04:26.544 11:01:31 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:26.544 01:04:26.544 real 0m1.404s 01:04:26.544 user 0m1.209s 01:04:26.544 sys 0m0.106s 01:04:26.544 11:01:31 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:26.544 11:01:31 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 01:04:26.544 ************************************ 01:04:26.544 END TEST accel_decomp_full 01:04:26.544 ************************************ 01:04:26.544 11:01:31 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:26.544 11:01:31 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 01:04:26.544 11:01:31 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 01:04:26.544 11:01:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:26.544 11:01:31 accel -- common/autotest_common.sh@10 -- # set +x 01:04:26.544 ************************************ 01:04:26.544 START TEST accel_decomp_mcore 01:04:26.544 ************************************ 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 01:04:26.544 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 01:04:26.544 [2024-07-22 11:01:31.722223] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:26.544 [2024-07-22 11:01:31.722453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74409 ] 01:04:26.803 [2024-07-22 11:01:31.858537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:04:26.803 [2024-07-22 11:01:31.903668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:04:26.803 [2024-07-22 11:01:31.903825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:04:26.803 [2024-07-22 11:01:31.903904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:26.803 [2024-07-22 11:01:31.903909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 01:04:26.803 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.804 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.804 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 01:04:26.804 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.804 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.804 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.804 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:26.804 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.804 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.804 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:26.804 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:26.804 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:26.804 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:26.804 11:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.179 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.180 11:01:33 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:28.180 01:04:28.180 real 0m1.402s 01:04:28.180 user 0m4.505s 01:04:28.180 sys 0m0.119s 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:28.180 11:01:33 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 01:04:28.180 ************************************ 01:04:28.180 END TEST accel_decomp_mcore 01:04:28.180 ************************************ 01:04:28.180 11:01:33 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:28.180 11:01:33 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 01:04:28.180 11:01:33 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 01:04:28.180 11:01:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:28.180 11:01:33 accel -- common/autotest_common.sh@10 -- # set +x 01:04:28.180 ************************************ 01:04:28.180 START TEST accel_decomp_full_mcore 01:04:28.180 ************************************ 01:04:28.180 11:01:33 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 01:04:28.180 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 01:04:28.180 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 01:04:28.180 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.180 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.180 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 01:04:28.180 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 01:04:28.180 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 01:04:28.180 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:28.180 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:28.180 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:28.180 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:28.180 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:28.180 11:01:33 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 01:04:28.180 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 01:04:28.180 [2024-07-22 11:01:33.191649] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:28.180 [2024-07-22 11:01:33.191727] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74442 ] 01:04:28.180 [2024-07-22 11:01:33.327879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:04:28.180 [2024-07-22 11:01:33.372226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:04:28.180 [2024-07-22 11:01:33.372413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:04:28.180 [2024-07-22 11:01:33.372592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:04:28.180 [2024-07-22 11:01:33.372638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 01:04:28.439 11:01:33 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:28.439 11:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:29.374 11:01:34 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:29.374 01:04:29.374 real 0m1.408s 01:04:29.374 user 0m4.538s 01:04:29.374 sys 0m0.125s 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:29.374 11:01:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 01:04:29.374 ************************************ 01:04:29.374 END TEST accel_decomp_full_mcore 01:04:29.374 ************************************ 01:04:29.633 11:01:34 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:29.633 11:01:34 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 01:04:29.633 11:01:34 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 01:04:29.633 11:01:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:29.633 11:01:34 accel -- common/autotest_common.sh@10 -- # set +x 01:04:29.633 ************************************ 01:04:29.633 START TEST accel_decomp_mthread 01:04:29.633 ************************************ 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 01:04:29.633 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 01:04:29.633 [2024-07-22 11:01:34.672012] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
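For the two multicore runs that finished just above, the only addition relative to the single-core invocations is the -m 0xf core mask: the EAL notices report four cores with reactors on cores 0 through 3, and the roughly 4.5 s of user time against about 1.4 s of wall time is consistent with four polling reactors sharing the same one-second workload. A sketch of that variant, using only the flags recorded in the trace:

  # Same decompress workload on four cores (mask 0xf = cores 0-3); add -o 0 for the
  # full-buffer (_full_mcore) case shown in the run above.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -m 0xf

The accel_decomp_mthread run starting at this point drops the core mask again and instead passes -T 2, which the option trace below echoes as val=2 in place of the val=1 seen in the single-threaded runs.
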
01:04:29.633 [2024-07-22 11:01:34.672087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74484 ] 01:04:29.633 [2024-07-22 11:01:34.814719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:29.890 [2024-07-22 11:01:34.858064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.890 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.891 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 01:04:29.891 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.891 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.891 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.891 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:29.891 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.891 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.891 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:29.891 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:29.891 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:29.891 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:29.891 11:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:31.265 01:04:31.265 real 0m1.400s 01:04:31.265 user 0m1.203s 01:04:31.265 sys 0m0.109s 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:31.265 11:01:36 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 01:04:31.265 ************************************ 01:04:31.265 END TEST accel_decomp_mthread 01:04:31.265 ************************************ 01:04:31.265 11:01:36 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:31.265 11:01:36 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 01:04:31.265 11:01:36 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 01:04:31.265 11:01:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:31.265 11:01:36 accel -- common/autotest_common.sh@10 -- # set +x 01:04:31.265 ************************************ 01:04:31.265 START 
TEST accel_decomp_full_mthread 01:04:31.265 ************************************ 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 01:04:31.265 [2024-07-22 11:01:36.144496] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
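The closing accel_decomp_full_mthread case combines the two previous variations: -o 0 for full-size buffers and -T 2 for the extra worker thread, with no core mask, so it still runs on a single reactor (core 0). Sketched with the flags recorded above:

  # Full-size buffers plus two threads; the trace echoes these as val='111250 bytes'
  # and val=2 in place of the '4096 bytes' / val=1 of the plain accel_decomp run.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0 -T 2
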
01:04:31.265 [2024-07-22 11:01:36.144576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74514 ] 01:04:31.265 [2024-07-22 11:01:36.285206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:31.265 [2024-07-22 11:01:36.329014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.265 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.266 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:31.266 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:31.266 11:01:36 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:31.266 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:31.266 11:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:04:32.643 01:04:32.643 real 0m1.418s 01:04:32.643 user 0m1.228s 01:04:32.643 sys 0m0.103s 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:32.643 11:01:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 01:04:32.643 ************************************ 01:04:32.643 END TEST accel_decomp_full_mthread 01:04:32.643 ************************************ 
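The accel_decomp_full_mthread case above is the full-buffer, multi-threaded variant of the software decompress test: accel.sh feeds accel_perf a one-second decompress workload against the bib test vector on two worker threads and verifies the output. A minimal sketch of re-running that invocation by hand, with the flag values copied verbatim from the trace (the generated JSON config the harness passes via -c /dev/fd/62 is omitted here, on the assumption that the software module needs no extra configuration):

  SPDK=/home/vagrant/spdk_repo/spdk
  # Same command accel.sh builds above: a 1-second decompress of the bib
  # test vector, verified (-y), with -o 0 and two worker threads (-T 2).
  "$SPDK/build/examples/accel_perf" \
          -t 1 -w decompress \
          -l "$SPDK/test/accel/bib" \
          -y -o 0 -T 2
  echo "accel_perf exit status: $?"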
01:04:32.643 11:01:37 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:32.643 11:01:37 accel -- accel/accel.sh@124 -- # [[ n == y ]] 01:04:32.643 11:01:37 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 01:04:32.643 11:01:37 accel -- accel/accel.sh@137 -- # build_accel_config 01:04:32.643 11:01:37 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:04:32.643 11:01:37 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 01:04:32.643 11:01:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:32.643 11:01:37 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:04:32.643 11:01:37 accel -- common/autotest_common.sh@10 -- # set +x 01:04:32.643 11:01:37 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:04:32.643 11:01:37 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:04:32.643 11:01:37 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 01:04:32.643 11:01:37 accel -- accel/accel.sh@40 -- # local IFS=, 01:04:32.643 11:01:37 accel -- accel/accel.sh@41 -- # jq -r . 01:04:32.643 ************************************ 01:04:32.643 START TEST accel_dif_functional_tests 01:04:32.643 ************************************ 01:04:32.643 11:01:37 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 01:04:32.643 [2024-07-22 11:01:37.662408] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:32.643 [2024-07-22 11:01:37.662498] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74544 ] 01:04:32.643 [2024-07-22 11:01:37.805750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:04:32.902 [2024-07-22 11:01:37.850559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:04:32.902 [2024-07-22 11:01:37.850743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:32.902 [2024-07-22 11:01:37.850744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:04:32.902 [2024-07-22 11:01:37.893789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:32.902 01:04:32.902 01:04:32.902 CUnit - A unit testing framework for C - Version 2.1-3 01:04:32.902 http://cunit.sourceforge.net/ 01:04:32.902 01:04:32.902 01:04:32.902 Suite: accel_dif 01:04:32.902 Test: verify: DIF generated, GUARD check ...passed 01:04:32.902 Test: verify: DIF generated, APPTAG check ...passed 01:04:32.902 Test: verify: DIF generated, REFTAG check ...passed 01:04:32.902 Test: verify: DIF not generated, GUARD check ...passed 01:04:32.902 Test: verify: DIF not generated, APPTAG check ...passed 01:04:32.902 Test: verify: DIF not generated, REFTAG check ...passed 01:04:32.902 Test: verify: APPTAG correct, APPTAG check ...[2024-07-22 11:01:37.917036] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 01:04:32.902 [2024-07-22 11:01:37.917091] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 01:04:32.902 [2024-07-22 11:01:37.917115] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 01:04:32.902 passed 01:04:32.902 Test: verify: APPTAG incorrect, APPTAG check ...passed 01:04:32.902 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 01:04:32.902 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 01:04:32.902 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 01:04:32.902 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 01:04:32.902 Test: verify copy: DIF generated, GUARD check ...passed 01:04:32.902 Test: verify copy: DIF generated, APPTAG check ...passed 01:04:32.902 Test: verify copy: DIF generated, REFTAG check ...passed 01:04:32.902 Test: verify copy: DIF not generated, GUARD check ...passed 01:04:32.902 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-22 11:01:37.917167] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 01:04:32.902 [2024-07-22 11:01:37.917278] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 01:04:32.902 [2024-07-22 11:01:37.917407] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 01:04:32.902 passed 01:04:32.902 Test: verify copy: DIF not generated, REFTAG check ...passed 01:04:32.902 Test: generate copy: DIF generated, GUARD check ...passed 01:04:32.902 Test: generate copy: DIF generated, APTTAG check ...passed 01:04:32.902 Test: generate copy: DIF generated, REFTAG check ...passed 01:04:32.902 Test: generate copy: DIF generated, no GUARD check flag set ...passed 01:04:32.902 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 01:04:32.902 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 01:04:32.902 Test: generate copy: iovecs-len validate ...passed 01:04:32.902 Test: generate copy: buffer alignment validate ...passed 01:04:32.902 01:04:32.902 [2024-07-22 11:01:37.917432] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 01:04:32.902 [2024-07-22 11:01:37.917457] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 01:04:32.902 [2024-07-22 11:01:37.917634] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
01:04:32.902 Run Summary: Type Total Ran Passed Failed Inactive 01:04:32.902 suites 1 1 n/a 0 0 01:04:32.902 tests 26 26 26 0 0 01:04:32.902 asserts 115 115 115 0 n/a 01:04:32.902 01:04:32.902 Elapsed time = 0.002 seconds 01:04:32.903 01:04:32.903 real 0m0.481s 01:04:32.903 user 0m0.589s 01:04:32.903 sys 0m0.138s 01:04:32.903 11:01:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:32.903 11:01:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 01:04:32.903 ************************************ 01:04:32.903 END TEST accel_dif_functional_tests 01:04:32.903 ************************************ 01:04:33.162 11:01:38 accel -- common/autotest_common.sh@1142 -- # return 0 01:04:33.162 01:04:33.162 real 0m32.546s 01:04:33.162 user 0m33.853s 01:04:33.162 sys 0m4.127s 01:04:33.162 11:01:38 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:33.162 11:01:38 accel -- common/autotest_common.sh@10 -- # set +x 01:04:33.162 ************************************ 01:04:33.162 END TEST accel 01:04:33.162 ************************************ 01:04:33.162 11:01:38 -- common/autotest_common.sh@1142 -- # return 0 01:04:33.162 11:01:38 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 01:04:33.162 11:01:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:04:33.162 11:01:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:33.162 11:01:38 -- common/autotest_common.sh@10 -- # set +x 01:04:33.162 ************************************ 01:04:33.162 START TEST accel_rpc 01:04:33.162 ************************************ 01:04:33.162 11:01:38 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 01:04:33.162 * Looking for test storage... 01:04:33.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 01:04:33.162 11:01:38 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 01:04:33.162 11:01:38 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=74614 01:04:33.162 11:01:38 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 01:04:33.162 11:01:38 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 74614 01:04:33.162 11:01:38 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 74614 ']' 01:04:33.162 11:01:38 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:33.162 11:01:38 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:04:33.162 11:01:38 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:04:33.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:04:33.162 11:01:38 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:04:33.162 11:01:38 accel_rpc -- common/autotest_common.sh@10 -- # set +x 01:04:33.421 [2024-07-22 11:01:38.393926] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
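The *ERROR* lines printed by dif.c during accel_dif_functional_tests above are expected negative-path output: the "DIF not generated" and "iovecs-len validate" cases deliberately present mismatching Guard/App Tag/Ref Tag values (and an invalid bounce buffer) and pass precisely because the mismatch is detected, which is why the CUnit summary still reports 26/26 tests and 115/115 asserts passed. The suite can be replayed on its own with the same binary the harness uses; the path is taken from the trace, and skipping the generated /dev/fd/62 accel config is an assumption:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Runs the CUnit "accel_dif" suite (verify / verify copy / generate copy cases).
  "$SPDK/test/accel/dif/dif"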
01:04:33.421 [2024-07-22 11:01:38.393993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74614 ] 01:04:33.421 [2024-07-22 11:01:38.536450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:33.421 [2024-07-22 11:01:38.578703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:34.354 11:01:39 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:04:34.354 11:01:39 accel_rpc -- common/autotest_common.sh@862 -- # return 0 01:04:34.354 11:01:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 01:04:34.354 11:01:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 01:04:34.354 11:01:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 01:04:34.354 11:01:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 01:04:34.354 11:01:39 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 01:04:34.354 11:01:39 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:04:34.354 11:01:39 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:34.354 11:01:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 01:04:34.354 ************************************ 01:04:34.354 START TEST accel_assign_opcode 01:04:34.354 ************************************ 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 01:04:34.354 [2024-07-22 11:01:39.238135] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 01:04:34.354 [2024-07-22 11:01:39.250109] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 01:04:34.354 [2024-07-22 11:01:39.299476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 01:04:34.354 11:01:39 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:04:34.354 software 01:04:34.354 ************************************ 01:04:34.354 END TEST accel_assign_opcode 01:04:34.354 ************************************ 01:04:34.354 01:04:34.354 real 0m0.233s 01:04:34.354 user 0m0.053s 01:04:34.354 sys 0m0.011s 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:34.354 11:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 01:04:34.354 11:01:39 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 01:04:34.354 11:01:39 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 74614 01:04:34.354 11:01:39 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 74614 ']' 01:04:34.354 11:01:39 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 74614 01:04:34.354 11:01:39 accel_rpc -- common/autotest_common.sh@953 -- # uname 01:04:34.354 11:01:39 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:04:34.354 11:01:39 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74614 01:04:34.612 killing process with pid 74614 01:04:34.612 11:01:39 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:04:34.612 11:01:39 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:04:34.612 11:01:39 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74614' 01:04:34.612 11:01:39 accel_rpc -- common/autotest_common.sh@967 -- # kill 74614 01:04:34.612 11:01:39 accel_rpc -- common/autotest_common.sh@972 -- # wait 74614 01:04:34.870 01:04:34.870 real 0m1.658s 01:04:34.870 user 0m1.645s 01:04:34.870 sys 0m0.446s 01:04:34.870 ************************************ 01:04:34.870 END TEST accel_rpc 01:04:34.870 ************************************ 01:04:34.870 11:01:39 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:34.870 11:01:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 01:04:34.870 11:01:39 -- common/autotest_common.sh@1142 -- # return 0 01:04:34.870 11:01:39 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:04:34.870 11:01:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:04:34.870 11:01:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:34.870 11:01:39 -- common/autotest_common.sh@10 -- # set +x 01:04:34.870 ************************************ 01:04:34.870 START TEST app_cmdline 01:04:34.870 ************************************ 01:04:34.870 11:01:39 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:04:34.870 * Looking for test storage... 
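The accel_assign_opcode case above talks to a spdk_tgt started with --wait-for-rpc, so opcode assignments can still be changed before framework initialization. A hand-run equivalent of the sequence the script performs, using the same RPC methods that appear in the trace (rpc.py path as in the log; jq is applied the same way the script applies it):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" accel_assign_opc -o copy -m incorrect    # accepted pre-init; logged as "assigned to module incorrect"
  "$RPC" accel_assign_opc -o copy -m software     # reassign the copy opcode to the software module
  "$RPC" framework_start_init                     # finish subsystem initialization
  "$RPC" accel_get_opc_assignments | jq -r .copy  # prints "software", matching the grep in the script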
01:04:34.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:04:34.870 11:01:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 01:04:34.870 11:01:40 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 01:04:34.870 11:01:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=74707 01:04:34.870 11:01:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 74707 01:04:34.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:04:34.870 11:01:40 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 74707 ']' 01:04:34.870 11:01:40 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:34.870 11:01:40 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 01:04:34.870 11:01:40 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:04:34.870 11:01:40 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 01:04:34.870 11:01:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:04:35.128 [2024-07-22 11:01:40.120719] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:35.128 [2024-07-22 11:01:40.120791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74707 ] 01:04:35.128 [2024-07-22 11:01:40.262771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:35.128 [2024-07-22 11:01:40.311331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:35.386 [2024-07-22 11:01:40.353377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:35.386 11:01:40 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:04:35.386 11:01:40 app_cmdline -- common/autotest_common.sh@862 -- # return 0 01:04:35.386 11:01:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 01:04:35.644 { 01:04:35.644 "version": "SPDK v24.09-pre git sha1 8fb860b73", 01:04:35.644 "fields": { 01:04:35.644 "major": 24, 01:04:35.644 "minor": 9, 01:04:35.644 "patch": 0, 01:04:35.644 "suffix": "-pre", 01:04:35.644 "commit": "8fb860b73" 01:04:35.644 } 01:04:35.644 } 01:04:35.644 11:01:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 01:04:35.644 11:01:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 01:04:35.644 11:01:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 01:04:35.644 11:01:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 01:04:35.644 11:01:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 01:04:35.644 11:01:40 app_cmdline -- app/cmdline.sh@26 -- # sort 01:04:35.644 11:01:40 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 01:04:35.644 11:01:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:04:35.644 11:01:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 01:04:35.644 11:01:40 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:04:35.644 11:01:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 01:04:35.644 11:01:40 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 01:04:35.644 11:01:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:04:35.644 11:01:40 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 01:04:35.644 11:01:40 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:04:35.644 11:01:40 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:04:35.644 11:01:40 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:35.644 11:01:40 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:04:35.644 11:01:40 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:35.644 11:01:40 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:04:35.644 11:01:40 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:35.644 11:01:40 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:04:35.644 11:01:40 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:04:35.644 11:01:40 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:04:35.902 request: 01:04:35.902 { 01:04:35.902 "method": "env_dpdk_get_mem_stats", 01:04:35.902 "req_id": 1 01:04:35.902 } 01:04:35.902 Got JSON-RPC error response 01:04:35.902 response: 01:04:35.902 { 01:04:35.902 "code": -32601, 01:04:35.902 "message": "Method not found" 01:04:35.902 } 01:04:35.902 11:01:40 app_cmdline -- common/autotest_common.sh@651 -- # es=1 01:04:35.902 11:01:40 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:04:35.902 11:01:40 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:04:35.902 11:01:40 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:04:35.902 11:01:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 74707 01:04:35.902 11:01:40 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 74707 ']' 01:04:35.902 11:01:40 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 74707 01:04:35.902 11:01:40 app_cmdline -- common/autotest_common.sh@953 -- # uname 01:04:35.902 11:01:40 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:04:35.902 11:01:40 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74707 01:04:35.902 killing process with pid 74707 01:04:35.902 11:01:41 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:04:35.902 11:01:41 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:04:35.902 11:01:41 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74707' 01:04:35.902 11:01:41 app_cmdline -- common/autotest_common.sh@967 -- # kill 74707 01:04:35.902 11:01:41 app_cmdline -- common/autotest_common.sh@972 -- # wait 74707 01:04:36.160 ************************************ 01:04:36.160 END TEST app_cmdline 01:04:36.160 ************************************ 01:04:36.160 01:04:36.160 real 0m1.375s 01:04:36.160 user 0m1.604s 01:04:36.160 sys 0m0.441s 01:04:36.160 11:01:41 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:36.160 11:01:41 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 01:04:36.419 11:01:41 -- common/autotest_common.sh@1142 -- # return 0 01:04:36.419 11:01:41 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:04:36.419 11:01:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:04:36.419 11:01:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:36.419 11:01:41 -- common/autotest_common.sh@10 -- # set +x 01:04:36.419 ************************************ 01:04:36.419 START TEST version 01:04:36.419 ************************************ 01:04:36.419 11:01:41 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:04:36.419 * Looking for test storage... 01:04:36.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:04:36.419 11:01:41 version -- app/version.sh@17 -- # get_header_version major 01:04:36.419 11:01:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:04:36.419 11:01:41 version -- app/version.sh@14 -- # cut -f2 01:04:36.419 11:01:41 version -- app/version.sh@14 -- # tr -d '"' 01:04:36.419 11:01:41 version -- app/version.sh@17 -- # major=24 01:04:36.419 11:01:41 version -- app/version.sh@18 -- # get_header_version minor 01:04:36.419 11:01:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:04:36.419 11:01:41 version -- app/version.sh@14 -- # tr -d '"' 01:04:36.419 11:01:41 version -- app/version.sh@14 -- # cut -f2 01:04:36.419 11:01:41 version -- app/version.sh@18 -- # minor=9 01:04:36.419 11:01:41 version -- app/version.sh@19 -- # get_header_version patch 01:04:36.419 11:01:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:04:36.419 11:01:41 version -- app/version.sh@14 -- # cut -f2 01:04:36.419 11:01:41 version -- app/version.sh@14 -- # tr -d '"' 01:04:36.419 11:01:41 version -- app/version.sh@19 -- # patch=0 01:04:36.419 11:01:41 version -- app/version.sh@20 -- # get_header_version suffix 01:04:36.419 11:01:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:04:36.419 11:01:41 version -- app/version.sh@14 -- # cut -f2 01:04:36.419 11:01:41 version -- app/version.sh@14 -- # tr -d '"' 01:04:36.419 11:01:41 version -- app/version.sh@20 -- # suffix=-pre 01:04:36.419 11:01:41 version -- app/version.sh@22 -- # version=24.9 01:04:36.419 11:01:41 version -- app/version.sh@25 -- # (( patch != 0 )) 01:04:36.419 11:01:41 version -- app/version.sh@28 -- # version=24.9rc0 01:04:36.419 11:01:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:04:36.419 11:01:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 01:04:36.419 11:01:41 version -- app/version.sh@30 -- # py_version=24.9rc0 01:04:36.419 11:01:41 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 01:04:36.419 01:04:36.419 real 0m0.212s 01:04:36.419 user 0m0.116s 01:04:36.419 sys 0m0.146s 01:04:36.419 ************************************ 01:04:36.419 END TEST version 01:04:36.419 ************************************ 01:04:36.419 11:01:41 
version -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:36.419 11:01:41 version -- common/autotest_common.sh@10 -- # set +x 01:04:36.678 11:01:41 -- common/autotest_common.sh@1142 -- # return 0 01:04:36.678 11:01:41 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 01:04:36.678 11:01:41 -- spdk/autotest.sh@198 -- # uname -s 01:04:36.678 11:01:41 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 01:04:36.678 11:01:41 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 01:04:36.678 11:01:41 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 01:04:36.678 11:01:41 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 01:04:36.678 11:01:41 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 01:04:36.678 11:01:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:04:36.678 11:01:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:36.678 11:01:41 -- common/autotest_common.sh@10 -- # set +x 01:04:36.678 ************************************ 01:04:36.678 START TEST spdk_dd 01:04:36.678 ************************************ 01:04:36.678 11:01:41 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 01:04:36.678 * Looking for test storage... 01:04:36.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:04:36.678 11:01:41 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:04:36.678 11:01:41 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:36.678 11:01:41 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:36.678 11:01:41 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:36.678 11:01:41 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:36.678 11:01:41 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:36.678 11:01:41 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:36.678 11:01:41 spdk_dd -- paths/export.sh@5 -- # export PATH 01:04:36.678 11:01:41 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:36.678 11:01:41 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:04:37.247 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:04:37.247 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:04:37.247 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:04:37.247 11:01:42 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 01:04:37.247 11:01:42 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@310 -- # local nvmes 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@295 -- # local bdf= 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@230 -- # local class 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@231 -- # local subclass 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@232 -- # local progif 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@233 -- # class=01 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@234 -- # subclass=08 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@235 -- # progif=02 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@237 -- # hash lspci 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@15 -- # local i 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@24 -- # return 0 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@15 -- # local i 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 01:04:37.247 11:01:42 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@24 -- # return 0 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@320 -- # uname -s 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 01:04:37.247 11:01:42 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 01:04:37.520 11:01:42 spdk_dd -- scripts/common.sh@320 -- # uname -s 01:04:37.520 11:01:42 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 01:04:37.520 11:01:42 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 01:04:37.520 11:01:42 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 01:04:37.520 11:01:42 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:04:37.520 11:01:42 spdk_dd -- dd/dd.sh@13 -- # check_liburing 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@139 -- # local lib 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 01:04:37.520 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd 
-- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == 
liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ 
librte_mempool.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 01:04:37.521 * spdk_dd linked to liburing 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 01:04:37.521 11:01:42 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 01:04:37.521 11:01:42 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 01:04:37.521 11:01:42 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 01:04:37.521 11:01:42 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:04:37.522 
11:01:42 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 01:04:37.522 
11:01:42 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 01:04:37.522 11:01:42 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 01:04:37.522 11:01:42 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 01:04:37.522 11:01:42 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 01:04:37.522 11:01:42 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 01:04:37.522 11:01:42 spdk_dd -- dd/common.sh@153 -- # return 0 01:04:37.522 11:01:42 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 01:04:37.522 11:01:42 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 01:04:37.522 11:01:42 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:04:37.522 11:01:42 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:37.522 11:01:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:04:37.522 ************************************ 01:04:37.522 START TEST spdk_dd_basic_rw 01:04:37.522 ************************************ 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 01:04:37.522 * Looking for test storage... 
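
The loop traced above is dd/common.sh walking every shared library the spdk_dd binary pulls in and testing each name against liburing.so.*; once liburing.so.2 matches it prints "* spdk_dd linked to liburing", sources build_config.sh, confirms CONFIG_URING=y (the [[ y != y ]] check), and exports liburing_in_use=1 so the dd.sh@15 guard (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) does not trip. A minimal sketch of that detection follows; the trace only shows the per-library tests, not where the list comes from, so the ldd pipeline and field layout here are assumptions rather than code from dd/common.sh (the real script reads the name from the second field with "read -r _ lib _"):

  # Sketch only: does the spdk_dd binary link against liburing?
  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  liburing_in_use=0
  while read -r lib _; do                      # first ldd field is the soname
      if [[ $lib == liburing.so.* ]]; then
          printf '* spdk_dd linked to liburing\n'
          liburing_in_use=1
      fi
  done < <(ldd "$bin")
  export liburing_in_use
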
01:04:37.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 01:04:37.522 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 01:04:37.784 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 01:04:37.784 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:04:37.785 ************************************ 01:04:37.785 START TEST dd_bs_lt_native_bs 01:04:37.785 ************************************ 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:04:37.785 11:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 01:04:37.785 { 01:04:37.785 "subsystems": [ 01:04:37.785 { 01:04:37.785 "subsystem": "bdev", 01:04:37.785 "config": [ 01:04:37.785 { 01:04:37.785 "params": { 01:04:37.785 "trtype": "pcie", 01:04:37.785 "traddr": "0000:00:10.0", 01:04:37.785 "name": "Nvme0" 01:04:37.785 }, 01:04:37.785 "method": "bdev_nvme_attach_controller" 01:04:37.785 }, 01:04:37.785 { 01:04:37.785 "method": "bdev_wait_for_examine" 01:04:37.785 } 01:04:37.785 ] 01:04:37.785 } 01:04:37.785 ] 01:04:37.785 } 01:04:37.785 [2024-07-22 11:01:42.955633] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
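
Just before this run, get_native_nvme_bs captured the spdk_nvme_identify output shown above and matched it twice: first against 'Current LBA Format: *LBA Format #([0-9]+)' to learn that format #04 is in use, then against 'LBA Format #04: Data Size: *([0-9]+)' to read its 4096-byte data size, which becomes native_bs. The --bs=2048 copy wrapped in NOT below is therefore expected to fail with the "--bs value cannot be less than ... native block size" error. A rough stand-alone sketch of that extraction (the identify invocation and both regexes are copied from the trace; the single-string capture replaces the script's mapfile array and is an assumption):

  # Sketch: derive the namespace's native block size from spdk_nvme_identify output.
  id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')
  if [[ $id =~ Current\ LBA\ Format:\ *LBA\ Format\ #([0-9]+) ]]; then
      lbaf=${BASH_REMATCH[1]}                                     # "04" in this run
      if [[ $id =~ LBA\ Format\ #${lbaf}:\ Data\ Size:\ *([0-9]+) ]]; then
          native_bs=${BASH_REMATCH[1]}                            # 4096 bytes
      fi
  fi
  echo "${native_bs:-unknown}"
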
01:04:37.785 [2024-07-22 11:01:42.955709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75022 ] 01:04:38.042 [2024-07-22 11:01:43.095789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:38.042 [2024-07-22 11:01:43.144598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:38.042 [2024-07-22 11:01:43.186281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:38.299 [2024-07-22 11:01:43.279021] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 01:04:38.299 [2024-07-22 11:01:43.279084] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:04:38.299 [2024-07-22 11:01:43.376904] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:04:38.299 ************************************ 01:04:38.299 END TEST dd_bs_lt_native_bs 01:04:38.299 ************************************ 01:04:38.300 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 01:04:38.300 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:04:38.300 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 01:04:38.300 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 01:04:38.300 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 01:04:38.300 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:04:38.300 01:04:38.300 real 0m0.560s 01:04:38.300 user 0m0.364s 01:04:38.300 sys 0m0.148s 01:04:38.300 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:38.300 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:04:38.557 ************************************ 01:04:38.557 START TEST dd_rw 01:04:38.557 ************************************ 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:04:38.557 11:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:39.123 11:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 01:04:39.123 11:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:04:39.123 11:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:39.123 11:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:39.123 [2024-07-22 11:01:44.080727] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:39.123 [2024-07-22 11:01:44.080996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75053 ] 01:04:39.123 { 01:04:39.123 "subsystems": [ 01:04:39.123 { 01:04:39.123 "subsystem": "bdev", 01:04:39.123 "config": [ 01:04:39.123 { 01:04:39.123 "params": { 01:04:39.123 "trtype": "pcie", 01:04:39.123 "traddr": "0000:00:10.0", 01:04:39.123 "name": "Nvme0" 01:04:39.123 }, 01:04:39.123 "method": "bdev_nvme_attach_controller" 01:04:39.123 }, 01:04:39.123 { 01:04:39.123 "method": "bdev_wait_for_examine" 01:04:39.123 } 01:04:39.123 ] 01:04:39.123 } 01:04:39.123 ] 01:04:39.123 } 01:04:39.123 [2024-07-22 11:01:44.221815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:39.123 [2024-07-22 11:01:44.270877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:39.123 [2024-07-22 11:01:44.312446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:39.384  Copying: 60/60 [kB] (average 19 MBps) 01:04:39.384 01:04:39.642 11:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 01:04:39.642 11:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:04:39.642 11:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:39.642 11:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:39.642 [2024-07-22 11:01:44.639908] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
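
dd_rw, whose setup is traced above, sweeps three block sizes derived from native_bs (4096 << 0..2, i.e. 4096, 8192 and 16384) and two queue depths (qds=(1 64)). Each pass generates a test file, writes it through spdk_dd to Nvme0n1, reads it back into a second file, and runs diff -q on the pair, which is exactly the sequence of spdk_dd invocations and 'Copying:' progress lines that follow. A condensed, self-contained approximation of one sweep; the inline JSON mirrors the config printed in the trace, while the count formula merely reproduces the 15 and 7 seen in this log and may not be the script's actual rule:

  # Approximate dd_rw sweep; the real basic_rw.sh uses its gen_bytes/gen_conf helpers and a
  # dedicated fd (--json /dev/fd/62) instead of the process substitution used here.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  conf='{"subsystems":[{"subsystem":"bdev","config":[
    {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},
    {"method":"bdev_wait_for_examine"}]}]}'
  for bs in 4096 8192 16384; do                      # native_bs << 0..2
      for qd in 1 64; do                             # qds=(1 64)
          count=$((61440 / bs))                      # 15 for bs=4096, 7 for bs=8192 in this log
          head -c $((bs * count)) /dev/urandom > "$dump0"
          "$SPDK_DD" --if="$dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(echo "$conf")
          "$SPDK_DD" --ib=Nvme0n1 --of="$dump1" --bs="$bs" --qd="$qd" --count="$count" --json <(echo "$conf")
          diff -q "$dump0" "$dump1"                  # verify the round trip
          # the real test then zeroes the bdev (clear_nvme) before the next pass
      done
  done

The write direction exercises --ob (output bdev) and the read-back uses --ib, so both directions of the bdev path are covered at every block size and queue depth.
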
01:04:39.642 [2024-07-22 11:01:44.640152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75066 ] 01:04:39.642 { 01:04:39.642 "subsystems": [ 01:04:39.642 { 01:04:39.642 "subsystem": "bdev", 01:04:39.642 "config": [ 01:04:39.642 { 01:04:39.642 "params": { 01:04:39.642 "trtype": "pcie", 01:04:39.642 "traddr": "0000:00:10.0", 01:04:39.642 "name": "Nvme0" 01:04:39.642 }, 01:04:39.642 "method": "bdev_nvme_attach_controller" 01:04:39.642 }, 01:04:39.642 { 01:04:39.642 "method": "bdev_wait_for_examine" 01:04:39.642 } 01:04:39.642 ] 01:04:39.642 } 01:04:39.642 ] 01:04:39.642 } 01:04:39.642 [2024-07-22 11:01:44.780595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:39.642 [2024-07-22 11:01:44.829351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:39.900 [2024-07-22 11:01:44.871474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:40.159  Copying: 60/60 [kB] (average 14 MBps) 01:04:40.159 01:04:40.159 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:40.159 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 01:04:40.159 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:04:40.159 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:04:40.159 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 01:04:40.159 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:04:40.159 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:04:40.159 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:04:40.159 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:04:40.159 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:40.159 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:40.159 [2024-07-22 11:01:45.198796] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
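
Between passes, clear_nvme (traced above with bs=1048576 and count=1) streams /dev/zero through spdk_dd onto the bdev so the next pass starts from zeroed media; for the 61440 bytes just written, a single 1 MiB block suffices, hence the 'Copying: 1024/1024 [kB]' line that follows. A sketch of that step, assuming count is simply the size rounded up to whole 1 MiB blocks (the rounding rule itself is not visible in this excerpt):

  # Sketch of clear_nvme: overwrite the first `size` bytes of a bdev with zeroes in 1 MiB writes.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
  clear_nvme_sketch() {
      local bdev=$1 size=$2
      local bs=1048576
      local count=$(( (size + bs - 1) / bs ))        # 61440 -> 1, matching the traced run
      "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json <(echo "$conf")
  }
  clear_nvme_sketch Nvme0n1 61440
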
01:04:40.159 [2024-07-22 11:01:45.199064] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75082 ] 01:04:40.159 { 01:04:40.159 "subsystems": [ 01:04:40.159 { 01:04:40.159 "subsystem": "bdev", 01:04:40.159 "config": [ 01:04:40.159 { 01:04:40.159 "params": { 01:04:40.159 "trtype": "pcie", 01:04:40.159 "traddr": "0000:00:10.0", 01:04:40.159 "name": "Nvme0" 01:04:40.159 }, 01:04:40.159 "method": "bdev_nvme_attach_controller" 01:04:40.159 }, 01:04:40.159 { 01:04:40.159 "method": "bdev_wait_for_examine" 01:04:40.159 } 01:04:40.159 ] 01:04:40.159 } 01:04:40.159 ] 01:04:40.159 } 01:04:40.159 [2024-07-22 11:01:45.338955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:40.418 [2024-07-22 11:01:45.383052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:40.418 [2024-07-22 11:01:45.424857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:40.676  Copying: 1024/1024 [kB] (average 500 MBps) 01:04:40.676 01:04:40.676 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:04:40.676 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 01:04:40.676 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 01:04:40.676 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 01:04:40.676 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 01:04:40.676 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:04:40.676 11:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:41.243 11:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 01:04:41.243 11:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:04:41.244 11:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:41.244 11:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:41.244 [2024-07-22 11:01:46.242188] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:41.244 [2024-07-22 11:01:46.242257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75101 ] 01:04:41.244 { 01:04:41.244 "subsystems": [ 01:04:41.244 { 01:04:41.244 "subsystem": "bdev", 01:04:41.244 "config": [ 01:04:41.244 { 01:04:41.244 "params": { 01:04:41.244 "trtype": "pcie", 01:04:41.244 "traddr": "0000:00:10.0", 01:04:41.244 "name": "Nvme0" 01:04:41.244 }, 01:04:41.244 "method": "bdev_nvme_attach_controller" 01:04:41.244 }, 01:04:41.244 { 01:04:41.244 "method": "bdev_wait_for_examine" 01:04:41.244 } 01:04:41.244 ] 01:04:41.244 } 01:04:41.244 ] 01:04:41.244 } 01:04:41.244 [2024-07-22 11:01:46.383835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:41.244 [2024-07-22 11:01:46.429072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:41.502 [2024-07-22 11:01:46.470572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:41.759  Copying: 60/60 [kB] (average 58 MBps) 01:04:41.759 01:04:41.759 11:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 01:04:41.759 11:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:04:41.759 11:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:41.759 11:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:41.759 [2024-07-22 11:01:46.790463] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:41.759 [2024-07-22 11:01:46.790539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75114 ] 01:04:41.759 { 01:04:41.759 "subsystems": [ 01:04:41.759 { 01:04:41.759 "subsystem": "bdev", 01:04:41.759 "config": [ 01:04:41.759 { 01:04:41.759 "params": { 01:04:41.759 "trtype": "pcie", 01:04:41.759 "traddr": "0000:00:10.0", 01:04:41.759 "name": "Nvme0" 01:04:41.759 }, 01:04:41.759 "method": "bdev_nvme_attach_controller" 01:04:41.759 }, 01:04:41.759 { 01:04:41.759 "method": "bdev_wait_for_examine" 01:04:41.759 } 01:04:41.759 ] 01:04:41.759 } 01:04:41.759 ] 01:04:41.759 } 01:04:41.759 [2024-07-22 11:01:46.932321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:42.016 [2024-07-22 11:01:46.978281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:42.016 [2024-07-22 11:01:47.020341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:42.273  Copying: 60/60 [kB] (average 58 MBps) 01:04:42.273 01:04:42.273 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:42.273 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 01:04:42.273 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:04:42.273 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:04:42.273 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 01:04:42.273 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:04:42.273 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:04:42.273 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:04:42.273 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:04:42.273 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:42.273 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:42.273 [2024-07-22 11:01:47.347325] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:42.273 [2024-07-22 11:01:47.347776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75130 ] 01:04:42.273 { 01:04:42.273 "subsystems": [ 01:04:42.273 { 01:04:42.273 "subsystem": "bdev", 01:04:42.273 "config": [ 01:04:42.273 { 01:04:42.273 "params": { 01:04:42.273 "trtype": "pcie", 01:04:42.273 "traddr": "0000:00:10.0", 01:04:42.273 "name": "Nvme0" 01:04:42.273 }, 01:04:42.273 "method": "bdev_nvme_attach_controller" 01:04:42.273 }, 01:04:42.273 { 01:04:42.273 "method": "bdev_wait_for_examine" 01:04:42.273 } 01:04:42.273 ] 01:04:42.273 } 01:04:42.273 ] 01:04:42.273 } 01:04:42.532 [2024-07-22 11:01:47.489969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:42.532 [2024-07-22 11:01:47.534417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:42.532 [2024-07-22 11:01:47.575868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:42.790  Copying: 1024/1024 [kB] (average 500 MBps) 01:04:42.790 01:04:42.790 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 01:04:42.790 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:04:42.790 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 01:04:42.790 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 01:04:42.790 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 01:04:42.790 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 01:04:42.790 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:04:42.790 11:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:43.354 11:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 01:04:43.354 11:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:04:43.354 11:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:43.354 11:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:43.354 [2024-07-22 11:01:48.352633] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
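
Note how the per-pass count shrinks as the block size grows: 15 blocks of 4096 bytes and 7 blocks of 8192 bytes keep each pass at roughly the same total, which is why the progress lines read 'Copying: 60/60 [kB]' for the 4096-byte passes and 'Copying: 56/56 [kB]' for the 8192-byte ones:

  echo $((15 * 4096))   # 61440 bytes = 60 kB, the size= value for the bs=4096 passes
  echo $((7 * 8192))    # 57344 bytes = 56 kB, the size= value for the bs=8192 passes
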
01:04:43.354 [2024-07-22 11:01:48.352899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75149 ] 01:04:43.354 { 01:04:43.354 "subsystems": [ 01:04:43.354 { 01:04:43.354 "subsystem": "bdev", 01:04:43.354 "config": [ 01:04:43.354 { 01:04:43.354 "params": { 01:04:43.354 "trtype": "pcie", 01:04:43.354 "traddr": "0000:00:10.0", 01:04:43.354 "name": "Nvme0" 01:04:43.354 }, 01:04:43.354 "method": "bdev_nvme_attach_controller" 01:04:43.354 }, 01:04:43.354 { 01:04:43.354 "method": "bdev_wait_for_examine" 01:04:43.354 } 01:04:43.354 ] 01:04:43.354 } 01:04:43.354 ] 01:04:43.354 } 01:04:43.354 [2024-07-22 11:01:48.494296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:43.354 [2024-07-22 11:01:48.541065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:43.611 [2024-07-22 11:01:48.582325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:43.869  Copying: 56/56 [kB] (average 27 MBps) 01:04:43.869 01:04:43.869 11:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 01:04:43.869 11:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:04:43.869 11:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:43.869 11:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:43.869 [2024-07-22 11:01:48.897255] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:43.869 [2024-07-22 11:01:48.897324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75162 ] 01:04:43.869 { 01:04:43.869 "subsystems": [ 01:04:43.869 { 01:04:43.869 "subsystem": "bdev", 01:04:43.869 "config": [ 01:04:43.869 { 01:04:43.869 "params": { 01:04:43.869 "trtype": "pcie", 01:04:43.869 "traddr": "0000:00:10.0", 01:04:43.869 "name": "Nvme0" 01:04:43.869 }, 01:04:43.869 "method": "bdev_nvme_attach_controller" 01:04:43.869 }, 01:04:43.869 { 01:04:43.869 "method": "bdev_wait_for_examine" 01:04:43.869 } 01:04:43.869 ] 01:04:43.869 } 01:04:43.869 ] 01:04:43.869 } 01:04:43.869 [2024-07-22 11:01:49.037306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:44.127 [2024-07-22 11:01:49.079578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:44.127 [2024-07-22 11:01:49.120732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:44.385  Copying: 56/56 [kB] (average 27 MBps) 01:04:44.385 01:04:44.385 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:44.385 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 01:04:44.385 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:04:44.385 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:04:44.385 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 01:04:44.385 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:04:44.385 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:04:44.385 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:04:44.385 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:04:44.385 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:44.385 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:44.385 { 01:04:44.385 "subsystems": [ 01:04:44.385 { 01:04:44.385 "subsystem": "bdev", 01:04:44.385 "config": [ 01:04:44.385 { 01:04:44.385 "params": { 01:04:44.385 "trtype": "pcie", 01:04:44.385 "traddr": "0000:00:10.0", 01:04:44.385 "name": "Nvme0" 01:04:44.385 }, 01:04:44.385 "method": "bdev_nvme_attach_controller" 01:04:44.385 }, 01:04:44.385 { 01:04:44.385 "method": "bdev_wait_for_examine" 01:04:44.385 } 01:04:44.385 ] 01:04:44.385 } 01:04:44.385 ] 01:04:44.385 } 01:04:44.385 [2024-07-22 11:01:49.442458] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
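For reference, the dd_rw pass traced above repeats a fixed write/read/verify/wipe cycle against the Nvme0n1 bdev. A minimal stand-alone sketch of one such cycle follows, assuming the SPDK checkout path and NVMe PCI address used in this run; the JSON mirrors the gen_conf output shown in the log, the dump files are created locally rather than under test/dd, and the process substitution is a stand-in for the test's gen_conf-on-/dev/fd/62 plumbing.

SPDK_DIR=/home/vagrant/spdk_repo/spdk            # checkout path used in this run (assumption if reproducing elsewhere)
DD=$SPDK_DIR/build/bin/spdk_dd
CONF='{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ] }
  ]
}'
head -c 57344 /dev/urandom > dump0                                                    # stand-in for gen_bytes 57344
"$DD" --if=dump0    --ob=Nvme0n1 --bs=8192 --qd=1           --json <(echo "$CONF")    # write 7 blocks of 8 KiB
"$DD" --ib=Nvme0n1  --of=dump1   --bs=8192 --qd=1 --count=7 --json <(echo "$CONF")    # read them back
diff -q dump0 dump1                                                                   # round-trip check
"$DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1    --json <(echo "$CONF")    # clear_nvme step: zero the first MiB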
01:04:44.385 [2024-07-22 11:01:49.442527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75178 ] 01:04:44.385 [2024-07-22 11:01:49.583990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:44.643 [2024-07-22 11:01:49.626998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:44.643 [2024-07-22 11:01:49.668222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:44.901  Copying: 1024/1024 [kB] (average 1000 MBps) 01:04:44.901 01:04:44.901 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:04:44.901 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 01:04:44.901 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 01:04:44.901 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 01:04:44.901 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 01:04:44.901 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:04:44.901 11:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:45.475 11:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 01:04:45.475 11:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:04:45.475 11:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:45.475 11:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:45.475 [2024-07-22 11:01:50.458373] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:45.475 [2024-07-22 11:01:50.458440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75197 ] 01:04:45.475 { 01:04:45.475 "subsystems": [ 01:04:45.475 { 01:04:45.475 "subsystem": "bdev", 01:04:45.475 "config": [ 01:04:45.475 { 01:04:45.475 "params": { 01:04:45.475 "trtype": "pcie", 01:04:45.475 "traddr": "0000:00:10.0", 01:04:45.475 "name": "Nvme0" 01:04:45.475 }, 01:04:45.475 "method": "bdev_nvme_attach_controller" 01:04:45.475 }, 01:04:45.475 { 01:04:45.475 "method": "bdev_wait_for_examine" 01:04:45.475 } 01:04:45.475 ] 01:04:45.475 } 01:04:45.475 ] 01:04:45.475 } 01:04:45.475 [2024-07-22 11:01:50.601607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:45.475 [2024-07-22 11:01:50.647318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:45.732 [2024-07-22 11:01:50.689031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:45.991  Copying: 56/56 [kB] (average 54 MBps) 01:04:45.991 01:04:45.991 11:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 01:04:45.991 11:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:04:45.991 11:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:45.991 11:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:45.991 [2024-07-22 11:01:51.012300] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:45.991 [2024-07-22 11:01:51.012367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75210 ] 01:04:45.991 { 01:04:45.991 "subsystems": [ 01:04:45.991 { 01:04:45.991 "subsystem": "bdev", 01:04:45.991 "config": [ 01:04:45.991 { 01:04:45.991 "params": { 01:04:45.991 "trtype": "pcie", 01:04:45.991 "traddr": "0000:00:10.0", 01:04:45.991 "name": "Nvme0" 01:04:45.991 }, 01:04:45.991 "method": "bdev_nvme_attach_controller" 01:04:45.991 }, 01:04:45.991 { 01:04:45.991 "method": "bdev_wait_for_examine" 01:04:45.991 } 01:04:45.991 ] 01:04:45.991 } 01:04:45.991 ] 01:04:45.991 } 01:04:45.991 [2024-07-22 11:01:51.153606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:46.250 [2024-07-22 11:01:51.198312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:46.250 [2024-07-22 11:01:51.239856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:46.509  Copying: 56/56 [kB] (average 54 MBps) 01:04:46.509 01:04:46.509 11:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:46.509 11:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 01:04:46.509 11:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:04:46.509 11:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:04:46.509 11:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 01:04:46.509 11:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:04:46.509 11:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:04:46.509 11:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:04:46.509 11:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:04:46.509 11:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:46.509 11:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:46.509 { 01:04:46.509 "subsystems": [ 01:04:46.509 { 01:04:46.509 "subsystem": "bdev", 01:04:46.509 "config": [ 01:04:46.509 { 01:04:46.509 "params": { 01:04:46.509 "trtype": "pcie", 01:04:46.509 "traddr": "0000:00:10.0", 01:04:46.509 "name": "Nvme0" 01:04:46.509 }, 01:04:46.509 "method": "bdev_nvme_attach_controller" 01:04:46.509 }, 01:04:46.509 { 01:04:46.509 "method": "bdev_wait_for_examine" 01:04:46.509 } 01:04:46.509 ] 01:04:46.509 } 01:04:46.509 ] 01:04:46.509 } 01:04:46.509 [2024-07-22 11:01:51.555886] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:46.509 [2024-07-22 11:01:51.555952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75226 ] 01:04:46.509 [2024-07-22 11:01:51.694348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:46.768 [2024-07-22 11:01:51.739252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:46.768 [2024-07-22 11:01:51.780554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:47.026  Copying: 1024/1024 [kB] (average 1000 MBps) 01:04:47.026 01:04:47.026 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 01:04:47.026 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:04:47.026 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 01:04:47.026 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 01:04:47.026 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 01:04:47.026 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 01:04:47.026 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:04:47.026 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:47.283 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 01:04:47.283 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:04:47.283 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:47.283 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:47.540 [2024-07-22 11:01:52.489568] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
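The passes above and below differ only in the block size and queue depth being swept; basic_rw.sh drives them from nested loops over a bss array and a qds array, as the dd/basic_rw.sh@21-@27 trace entries show. A rough reconstruction of that driver loop, with the array contents and block counts inferred from this run only:

bss=(8192 16384)        # block sizes observed in this run; the script's real arrays may hold more values
qds=(1 64)              # queue depths observed in this run
for bs in "${bss[@]}"; do
    count=$(( bs == 8192 ? 7 : 3 ))     # this run used 7 and 3 blocks; the script picks its own count
    size=$(( count * bs ))              # 57344 and 49152 bytes, matching the sizes in the trace
    for qd in "${qds[@]}"; do
        echo "pass: bs=$bs qd=$qd count=$count size=$size"   # the real loop runs the write/read/diff/clear cycle sketched earlier
    done
done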
01:04:47.540 [2024-07-22 11:01:52.489631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75245 ] 01:04:47.540 { 01:04:47.540 "subsystems": [ 01:04:47.540 { 01:04:47.540 "subsystem": "bdev", 01:04:47.540 "config": [ 01:04:47.540 { 01:04:47.540 "params": { 01:04:47.540 "trtype": "pcie", 01:04:47.540 "traddr": "0000:00:10.0", 01:04:47.540 "name": "Nvme0" 01:04:47.540 }, 01:04:47.540 "method": "bdev_nvme_attach_controller" 01:04:47.541 }, 01:04:47.541 { 01:04:47.541 "method": "bdev_wait_for_examine" 01:04:47.541 } 01:04:47.541 ] 01:04:47.541 } 01:04:47.541 ] 01:04:47.541 } 01:04:47.541 [2024-07-22 11:01:52.630046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:47.541 [2024-07-22 11:01:52.673462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:47.541 [2024-07-22 11:01:52.715305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:47.798  Copying: 48/48 [kB] (average 46 MBps) 01:04:47.798 01:04:47.798 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 01:04:47.798 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:04:47.798 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:47.798 11:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:48.056 [2024-07-22 11:01:53.026734] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:48.056 [2024-07-22 11:01:53.026799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75258 ] 01:04:48.056 { 01:04:48.056 "subsystems": [ 01:04:48.056 { 01:04:48.056 "subsystem": "bdev", 01:04:48.056 "config": [ 01:04:48.056 { 01:04:48.056 "params": { 01:04:48.056 "trtype": "pcie", 01:04:48.056 "traddr": "0000:00:10.0", 01:04:48.056 "name": "Nvme0" 01:04:48.056 }, 01:04:48.056 "method": "bdev_nvme_attach_controller" 01:04:48.056 }, 01:04:48.056 { 01:04:48.056 "method": "bdev_wait_for_examine" 01:04:48.056 } 01:04:48.056 ] 01:04:48.056 } 01:04:48.056 ] 01:04:48.056 } 01:04:48.056 [2024-07-22 11:01:53.167093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:48.056 [2024-07-22 11:01:53.211403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:48.056 [2024-07-22 11:01:53.252897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:48.314  Copying: 48/48 [kB] (average 23 MBps) 01:04:48.314 01:04:48.314 11:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:48.573 11:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 01:04:48.573 11:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:04:48.573 11:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:04:48.573 11:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 01:04:48.573 11:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:04:48.573 11:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:04:48.573 11:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:04:48.573 11:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:04:48.573 11:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:48.573 11:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:48.573 [2024-07-22 11:01:53.563373] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:48.573 [2024-07-22 11:01:53.563711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75274 ] 01:04:48.573 { 01:04:48.573 "subsystems": [ 01:04:48.573 { 01:04:48.573 "subsystem": "bdev", 01:04:48.573 "config": [ 01:04:48.573 { 01:04:48.573 "params": { 01:04:48.573 "trtype": "pcie", 01:04:48.573 "traddr": "0000:00:10.0", 01:04:48.573 "name": "Nvme0" 01:04:48.573 }, 01:04:48.573 "method": "bdev_nvme_attach_controller" 01:04:48.573 }, 01:04:48.573 { 01:04:48.573 "method": "bdev_wait_for_examine" 01:04:48.573 } 01:04:48.573 ] 01:04:48.573 } 01:04:48.573 ] 01:04:48.573 } 01:04:48.573 [2024-07-22 11:01:53.704500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:48.573 [2024-07-22 11:01:53.747032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:48.832 [2024-07-22 11:01:53.788853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:49.090  Copying: 1024/1024 [kB] (average 500 MBps) 01:04:49.090 01:04:49.090 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:04:49.090 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 01:04:49.090 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 01:04:49.090 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 01:04:49.090 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 01:04:49.090 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:04:49.090 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:49.348 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 01:04:49.348 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:04:49.348 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:49.348 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:49.348 [2024-07-22 11:01:54.497202] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:49.348 [2024-07-22 11:01:54.497409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75293 ] 01:04:49.348 { 01:04:49.348 "subsystems": [ 01:04:49.348 { 01:04:49.348 "subsystem": "bdev", 01:04:49.348 "config": [ 01:04:49.348 { 01:04:49.348 "params": { 01:04:49.348 "trtype": "pcie", 01:04:49.348 "traddr": "0000:00:10.0", 01:04:49.348 "name": "Nvme0" 01:04:49.348 }, 01:04:49.348 "method": "bdev_nvme_attach_controller" 01:04:49.348 }, 01:04:49.348 { 01:04:49.348 "method": "bdev_wait_for_examine" 01:04:49.348 } 01:04:49.348 ] 01:04:49.348 } 01:04:49.348 ] 01:04:49.348 } 01:04:49.607 [2024-07-22 11:01:54.639835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:49.607 [2024-07-22 11:01:54.684445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:49.607 [2024-07-22 11:01:54.726272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:49.865  Copying: 48/48 [kB] (average 46 MBps) 01:04:49.865 01:04:49.865 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 01:04:49.865 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:04:49.865 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:49.865 11:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:49.865 { 01:04:49.865 "subsystems": [ 01:04:49.865 { 01:04:49.865 "subsystem": "bdev", 01:04:49.865 "config": [ 01:04:49.865 { 01:04:49.865 "params": { 01:04:49.865 "trtype": "pcie", 01:04:49.865 "traddr": "0000:00:10.0", 01:04:49.865 "name": "Nvme0" 01:04:49.865 }, 01:04:49.865 "method": "bdev_nvme_attach_controller" 01:04:49.865 }, 01:04:49.865 { 01:04:49.865 "method": "bdev_wait_for_examine" 01:04:49.865 } 01:04:49.865 ] 01:04:49.865 } 01:04:49.865 ] 01:04:49.865 } 01:04:49.865 [2024-07-22 11:01:55.040756] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:49.865 [2024-07-22 11:01:55.040825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75306 ] 01:04:50.123 [2024-07-22 11:01:55.184343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:50.123 [2024-07-22 11:01:55.229196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:50.124 [2024-07-22 11:01:55.270514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:50.383  Copying: 48/48 [kB] (average 46 MBps) 01:04:50.383 01:04:50.383 11:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:50.383 11:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 01:04:50.383 11:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:04:50.383 11:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:04:50.383 11:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 01:04:50.383 11:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:04:50.383 11:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:04:50.383 11:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:04:50.383 11:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:04:50.383 11:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:50.383 11:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:50.383 { 01:04:50.383 "subsystems": [ 01:04:50.383 { 01:04:50.383 "subsystem": "bdev", 01:04:50.383 "config": [ 01:04:50.383 { 01:04:50.383 "params": { 01:04:50.383 "trtype": "pcie", 01:04:50.383 "traddr": "0000:00:10.0", 01:04:50.383 "name": "Nvme0" 01:04:50.383 }, 01:04:50.383 "method": "bdev_nvme_attach_controller" 01:04:50.383 }, 01:04:50.383 { 01:04:50.383 "method": "bdev_wait_for_examine" 01:04:50.383 } 01:04:50.383 ] 01:04:50.383 } 01:04:50.383 ] 01:04:50.383 } 01:04:50.642 [2024-07-22 11:01:55.591873] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:50.642 [2024-07-22 11:01:55.591943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75322 ] 01:04:50.642 [2024-07-22 11:01:55.732431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:50.642 [2024-07-22 11:01:55.776021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:50.642 [2024-07-22 11:01:55.817412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:50.901  Copying: 1024/1024 [kB] (average 1000 MBps) 01:04:50.901 01:04:50.901 01:04:50.901 real 0m12.548s 01:04:50.901 user 0m8.704s 01:04:50.901 sys 0m4.831s 01:04:50.901 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:50.901 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:04:50.901 ************************************ 01:04:50.901 END TEST dd_rw 01:04:50.901 ************************************ 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:04:51.160 ************************************ 01:04:51.160 START TEST dd_rw_offset 01:04:51.160 ************************************ 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=qx4q0n3g0u4l3h278ojw7vh5fu85vx906bpckf24gz8sjwu3ws2ung742scscan7tuwoaitc5n4mp3qqlnbzbbjwl8snj6yffsdk7bog3wevdec6fjd9j3uhbq5wzdo9ezu192e3jhc2wmzriul7lwj9ba821ug8u0vm5oqpuvzqt7lw3qpkg1g62qzzwfefq290zyn7bmbkxh66ve0p7kaaa8uk1j21dxubjy288h196bi0yfngubwpymizze9q6bmjrea20yocsnnd3slij94adbfaie3jt3y85m6r2mew7vrozb83ykdmumztgxzvz4jz0aly4o1tqfj60u7eyquoprf1w2bsvcrfxvuy8sy13pd2v8jrdohtdwtfwyqg7o5nvv5wd80pd0in4r5hr0zowj12mljl0q3je6c31z9ah4in13wysjv6k9fwziu95halk9g3tunh0xqbx9v5wtmqhx3fr6ywxiantszpulq9bp124tbezh6jjaxeaukjf21221cplnyod1wggdmc3rgkx1btzte5ku6frnrsap8jbupfhfecs6ckbhrecpyv3wtw79ved6mvpaj4jswbmmhk75qepb7u7k8o0pultymnoaeqg7m4oomqj0hsuum6i1alntrek06t62a4j598sr83658vxg9f95vj8yvo3211mxl2uqsu6ly9sw56yj66x1j1tx4uola1n1xbyaf6wazb12t3ajlkdjlmpkkp5rx26leqtcxx0fpqb9h7my2g4w1g4da7if9uw2b8hyp364jwnp6gns67uhqep9ya30wzrm1x03loepc6q6m4iytettx3qdepzjkhyv0c86mzhmfxoj8zgopa4loc4s1scdtpar1a4lugzudnz5m0yvf9zh7ahu9l0fk4687j7k6cft1y5l44tjbcazlmp3pasvwi400ea88pkmapse0j842pxaa30oi0jr2sdoz0iy1ievg62h8txmsnm6h62tsaqmtui6y4v3pqlgcqh435m9mk2sz7jtf6dfnlz148i8a3vz5imfohy3d9pye5413uitbcvqgckasaz8wksz1h07cy4f6f72v3qithjhhsvdcyoi3yh7dybmbiym6e4a1z02s4hn1p5brx0aw8jgnhe2rl7h142p0toiag2ly81ci118xvc9w6yb66eqzqtntyphcdtvrpv0yfs9fm1edim2bpnn5gcmarumk91vn31tq1lrvq6arylwaoxrjibwbxuj11y2xs1uanssmbg08jaiiyxgj3fv7w1n2h55yl9d9mpucdudbqk67bp436at2zsbdg6u38mn0cvtj7pqh962tb7hpnhu0b4dnh7mk5ee4303gkzlxsurwqctexh3ywr51m5i8ewkb02eqz8myps35gxnrnj5g6m6lz10ib6wfxjrzq3zjjzupw4owq4j2knuybw8sr3f26ieohnlg4yodfj6nub0av2cednv1ozhonjaecueoymh0f5i8xwza6wqnn1risbr13k5fdmz9xd6m0sa2qd81x891hczxez8iuun15nfh9w63uface0xnj27gc6e9m304f5pxr62sr2z0a6b05p7d29nlth6l6k3z3l5ed2h0z7u2asyj8y4s3uq7m1gefpv6a1ze4a21hfxfnyl703cjmjb6hghglq3270bci0fyonrl8iuxekkavfyt8giyoe4gxeji7t3jj35r5xwcztn6xy9hzk9cm891hix2otspaxvkpzmu03dwtpcede9atx5n1us9l3xth8ccl6sdygb7034a6kxsx7w76avws86fbom71qyktub41d29i445v3deaihn7v8ljklib7o54vtt8yu1iv5njhomz6jyyjnmgcfomfahskemypm6f0opcadxewu7lfc59u6b4qtu4r1w4i23qyqo3eivuart541v1x6ebfe7ird6nhwb5vs7b987i2up1b9yn90sw5dotmniszeik8gsejqqnb0tt8eogssrq29uq62djj15myxas60apiq1z4d967otzr5dew2b4unxiddgk1pwwgcmcb5bhq1tux4hnmpwl9ofnbv3pc14qkmt1f2fzjk1atckfxn2moliub9cqqpn94kp7f5lh70cchmuuhw5tkdy1zub6a39aexp9ql4h28q7kmisv0ok6cs1v6zc8r5y7ugzs7mso6mtwzv7loivj08av1a00ia4jztnrgw0rrmb1yua8alnmko4uj0nt5uxdnqaud4jwkz2blx53aky10m9pv0r6vxe9rsgo8l6vmiwvo1zkeutkg5ocugp1ekih7xez4b5xdnx5alqq983rw3lkp5617qkzyl8p7fa53zd2vn3xhwaujhm0llcqsuevmwomrjuidwz7p1a06z02bu2z9i5i5rpf2an333c2vg41xrrhls6w3l2s8g3u2ta9tnzalslvkdp2dcz75w3nvfznghn7jkb88k7nd10w5o9k8m3jodpfiaazbggpvg0oaz98a9p7qfmf7bda249312570h1snqiw1flf3b5j478f3xe3jo5tgxg1ou4zdscdicgx07og0skzuayre5xjjkg9lfr4zsyvzhp8o2ctv0n2km8fywro48h1yanu7nwldy0mzpb312e6imzps3z2fb3heb2trt90nqim5l5lrcexz0mgsucapwb4tqcscjdysvh2u1tqfrbx4oi9k5pfkdj4kqqfwb5nk2sldlxfl6krdhlkt4a0z17b685u0fn7a84bcvy3z2rvjx90gdovx6dqbfeh0qer2ccbn23007chu58wpimhhfyfodp47a1psy5ppf6ou0nlaj6982kmqjclcvgirrtqdu5uc30dxbn4t2036ajuau2klfwbpjhpu3jdlvatp14uqf0e2ojy8uvo95rtzeogblqyq9eklx5ag2lc5jh6yioavrjz2x918mgpu0khaz84018cbeainjnff0x871zxheujsl62rk75ngdp5yr4y0z9i06f8gzm8ihphe3cadti0524ex10a8gulijh9wglkbh2l75u2xeay00bw0cnbe0utaftahhj11367e9p12m29ggwrejgz18e03m9qi0rwrewgb4ftpwmakn7gcewqgi4hwvhtmmrbpy4ccxwtccjy2zomea989483uabqjc172yv9rhs7ew1sggo3el0c788ul0nzwpip043ssyfrbckalxwtbygy8zncp6ihliukkv0bbik88onq3icihkam7zeeeu756mgb2ukx7l3sivwmnf7gto3sd2otaiunnhixsfn0rkgyxa64gyjtnoqa1ougnejnkar2112oslm5njmm7l15y8nugl0pp1kcop5i96ha4qz6m4de8e5048xsylshr4p0eqy5h9ziw5730je7im7a97jyybntwt4h6qqah7ij5neiib37jgq6qyvnjcem1azxxwdxks7n0cgqxrs17e3j6fwtiz4srg0v2qwh0zkpkxw8si44cn1xtomsh1ztdfoedpitumqhe5f4
3w3m6kyqiscagpootz7hau8ng3earbw0f7kjfo04rvixdt9voqfp5878cu17safh9wqjdhg0he6fdjjsqkcaicr8gvrfy8q6ark29hnoik23n8yzpa13eiwjmupnoeu5xiahna9rdy1d6f1drirbp2pf67r40iyqnflgsw5a30rtjtolymjzn339abvtdata099bd4e9zgmi7n2blzi8h3kt8oek4268z9opdzn7vz3kisg1lvzjtr50noypt6cw7gwyjj85ogmz4ag85x707migouj9406jvjygmbnlc0fn8wuuraxxwiu6r23lp46eo35yx1qklpkdbx3pq2i55m53wwznlgz6zpjjgrjoy6ft0m45rsno9jyvtjlh7suswamko139p6dcha52xbhi07ob6p506c0rur7j8hfki8vtx2plzxcvh5azhnx3zpcz9k32hm4jev8o42wnug0jia5yv2nw0yn3ph9sr98ll08xpoj67utrgqbhbqgwxmijj61mbzi0hli3jwjlhjwdomp2fm2i67dw5c 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 01:04:51.160 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 01:04:51.160 [2024-07-22 11:01:56.251551] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:51.160 [2024-07-22 11:01:56.251617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75357 ] 01:04:51.160 { 01:04:51.160 "subsystems": [ 01:04:51.160 { 01:04:51.160 "subsystem": "bdev", 01:04:51.160 "config": [ 01:04:51.160 { 01:04:51.160 "params": { 01:04:51.160 "trtype": "pcie", 01:04:51.160 "traddr": "0000:00:10.0", 01:04:51.160 "name": "Nvme0" 01:04:51.160 }, 01:04:51.160 "method": "bdev_nvme_attach_controller" 01:04:51.160 }, 01:04:51.160 { 01:04:51.160 "method": "bdev_wait_for_examine" 01:04:51.160 } 01:04:51.160 ] 01:04:51.160 } 01:04:51.160 ] 01:04:51.160 } 01:04:51.419 [2024-07-22 11:01:56.393517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:51.419 [2024-07-22 11:01:56.438759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:51.419 [2024-07-22 11:01:56.480142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:51.677  Copying: 4096/4096 [B] (average 4000 kBps) 01:04:51.677 01:04:51.677 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 01:04:51.677 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 01:04:51.677 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 01:04:51.677 11:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 01:04:51.677 [2024-07-22 11:01:56.793439] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
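The dd_rw_offset pass above exercises spdk_dd's --seek and --skip handling: 4096 generated bytes are written one block into the bdev and then read back from the same offset before a byte-for-byte comparison (the long [[ ... == ... ]] check in the next trace block). A reduced sketch, reusing DD and CONF from the earlier sketch:

head -c 4096 /dev/urandom > dump0                                          # stand-in for gen_bytes 4096
"$DD" --if=dump0   --ob=Nvme0n1 --seek=1           --json <(echo "$CONF")  # write 4 KiB one block into the bdev
"$DD" --ib=Nvme0n1 --of=dump1   --skip=1 --count=1 --json <(echo "$CONF")  # read that block back from the same offset
cmp dump0 dump1                                                            # the test compares in-shell via read -rn4096 instead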
01:04:51.677 [2024-07-22 11:01:56.793534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75366 ] 01:04:51.677 { 01:04:51.677 "subsystems": [ 01:04:51.677 { 01:04:51.677 "subsystem": "bdev", 01:04:51.677 "config": [ 01:04:51.677 { 01:04:51.677 "params": { 01:04:51.677 "trtype": "pcie", 01:04:51.677 "traddr": "0000:00:10.0", 01:04:51.677 "name": "Nvme0" 01:04:51.677 }, 01:04:51.677 "method": "bdev_nvme_attach_controller" 01:04:51.677 }, 01:04:51.677 { 01:04:51.677 "method": "bdev_wait_for_examine" 01:04:51.677 } 01:04:51.677 ] 01:04:51.677 } 01:04:51.677 ] 01:04:51.677 } 01:04:51.935 [2024-07-22 11:01:56.933750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:51.935 [2024-07-22 11:01:56.978416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:51.935 [2024-07-22 11:01:57.020716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:52.194  Copying: 4096/4096 [B] (average 4000 kBps) 01:04:52.194 01:04:52.194 11:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ qx4q0n3g0u4l3h278ojw7vh5fu85vx906bpckf24gz8sjwu3ws2ung742scscan7tuwoaitc5n4mp3qqlnbzbbjwl8snj6yffsdk7bog3wevdec6fjd9j3uhbq5wzdo9ezu192e3jhc2wmzriul7lwj9ba821ug8u0vm5oqpuvzqt7lw3qpkg1g62qzzwfefq290zyn7bmbkxh66ve0p7kaaa8uk1j21dxubjy288h196bi0yfngubwpymizze9q6bmjrea20yocsnnd3slij94adbfaie3jt3y85m6r2mew7vrozb83ykdmumztgxzvz4jz0aly4o1tqfj60u7eyquoprf1w2bsvcrfxvuy8sy13pd2v8jrdohtdwtfwyqg7o5nvv5wd80pd0in4r5hr0zowj12mljl0q3je6c31z9ah4in13wysjv6k9fwziu95halk9g3tunh0xqbx9v5wtmqhx3fr6ywxiantszpulq9bp124tbezh6jjaxeaukjf21221cplnyod1wggdmc3rgkx1btzte5ku6frnrsap8jbupfhfecs6ckbhrecpyv3wtw79ved6mvpaj4jswbmmhk75qepb7u7k8o0pultymnoaeqg7m4oomqj0hsuum6i1alntrek06t62a4j598sr83658vxg9f95vj8yvo3211mxl2uqsu6ly9sw56yj66x1j1tx4uola1n1xbyaf6wazb12t3ajlkdjlmpkkp5rx26leqtcxx0fpqb9h7my2g4w1g4da7if9uw2b8hyp364jwnp6gns67uhqep9ya30wzrm1x03loepc6q6m4iytettx3qdepzjkhyv0c86mzhmfxoj8zgopa4loc4s1scdtpar1a4lugzudnz5m0yvf9zh7ahu9l0fk4687j7k6cft1y5l44tjbcazlmp3pasvwi400ea88pkmapse0j842pxaa30oi0jr2sdoz0iy1ievg62h8txmsnm6h62tsaqmtui6y4v3pqlgcqh435m9mk2sz7jtf6dfnlz148i8a3vz5imfohy3d9pye5413uitbcvqgckasaz8wksz1h07cy4f6f72v3qithjhhsvdcyoi3yh7dybmbiym6e4a1z02s4hn1p5brx0aw8jgnhe2rl7h142p0toiag2ly81ci118xvc9w6yb66eqzqtntyphcdtvrpv0yfs9fm1edim2bpnn5gcmarumk91vn31tq1lrvq6arylwaoxrjibwbxuj11y2xs1uanssmbg08jaiiyxgj3fv7w1n2h55yl9d9mpucdudbqk67bp436at2zsbdg6u38mn0cvtj7pqh962tb7hpnhu0b4dnh7mk5ee4303gkzlxsurwqctexh3ywr51m5i8ewkb02eqz8myps35gxnrnj5g6m6lz10ib6wfxjrzq3zjjzupw4owq4j2knuybw8sr3f26ieohnlg4yodfj6nub0av2cednv1ozhonjaecueoymh0f5i8xwza6wqnn1risbr13k5fdmz9xd6m0sa2qd81x891hczxez8iuun15nfh9w63uface0xnj27gc6e9m304f5pxr62sr2z0a6b05p7d29nlth6l6k3z3l5ed2h0z7u2asyj8y4s3uq7m1gefpv6a1ze4a21hfxfnyl703cjmjb6hghglq3270bci0fyonrl8iuxekkavfyt8giyoe4gxeji7t3jj35r5xwcztn6xy9hzk9cm891hix2otspaxvkpzmu03dwtpcede9atx5n1us9l3xth8ccl6sdygb7034a6kxsx7w76avws86fbom71qyktub41d29i445v3deaihn7v8ljklib7o54vtt8yu1iv5njhomz6jyyjnmgcfomfahskemypm6f0opcadxewu7lfc59u6b4qtu4r1w4i23qyqo3eivuart541v1x6ebfe7ird6nhwb5vs7b987i2up1b9yn90sw5dotmniszeik8gsejqqnb0tt8eogssrq29uq62djj15myxas60apiq1z4d967otzr5dew2b4unxiddgk1pwwgcmcb5bhq1tux4hnmpwl9ofnbv3pc14qkmt1f2fzjk1atckfxn2moliub9cqqpn94kp7f5lh70cchmuuhw5tkdy1z
ub6a39aexp9ql4h28q7kmisv0ok6cs1v6zc8r5y7ugzs7mso6mtwzv7loivj08av1a00ia4jztnrgw0rrmb1yua8alnmko4uj0nt5uxdnqaud4jwkz2blx53aky10m9pv0r6vxe9rsgo8l6vmiwvo1zkeutkg5ocugp1ekih7xez4b5xdnx5alqq983rw3lkp5617qkzyl8p7fa53zd2vn3xhwaujhm0llcqsuevmwomrjuidwz7p1a06z02bu2z9i5i5rpf2an333c2vg41xrrhls6w3l2s8g3u2ta9tnzalslvkdp2dcz75w3nvfznghn7jkb88k7nd10w5o9k8m3jodpfiaazbggpvg0oaz98a9p7qfmf7bda249312570h1snqiw1flf3b5j478f3xe3jo5tgxg1ou4zdscdicgx07og0skzuayre5xjjkg9lfr4zsyvzhp8o2ctv0n2km8fywro48h1yanu7nwldy0mzpb312e6imzps3z2fb3heb2trt90nqim5l5lrcexz0mgsucapwb4tqcscjdysvh2u1tqfrbx4oi9k5pfkdj4kqqfwb5nk2sldlxfl6krdhlkt4a0z17b685u0fn7a84bcvy3z2rvjx90gdovx6dqbfeh0qer2ccbn23007chu58wpimhhfyfodp47a1psy5ppf6ou0nlaj6982kmqjclcvgirrtqdu5uc30dxbn4t2036ajuau2klfwbpjhpu3jdlvatp14uqf0e2ojy8uvo95rtzeogblqyq9eklx5ag2lc5jh6yioavrjz2x918mgpu0khaz84018cbeainjnff0x871zxheujsl62rk75ngdp5yr4y0z9i06f8gzm8ihphe3cadti0524ex10a8gulijh9wglkbh2l75u2xeay00bw0cnbe0utaftahhj11367e9p12m29ggwrejgz18e03m9qi0rwrewgb4ftpwmakn7gcewqgi4hwvhtmmrbpy4ccxwtccjy2zomea989483uabqjc172yv9rhs7ew1sggo3el0c788ul0nzwpip043ssyfrbckalxwtbygy8zncp6ihliukkv0bbik88onq3icihkam7zeeeu756mgb2ukx7l3sivwmnf7gto3sd2otaiunnhixsfn0rkgyxa64gyjtnoqa1ougnejnkar2112oslm5njmm7l15y8nugl0pp1kcop5i96ha4qz6m4de8e5048xsylshr4p0eqy5h9ziw5730je7im7a97jyybntwt4h6qqah7ij5neiib37jgq6qyvnjcem1azxxwdxks7n0cgqxrs17e3j6fwtiz4srg0v2qwh0zkpkxw8si44cn1xtomsh1ztdfoedpitumqhe5f43w3m6kyqiscagpootz7hau8ng3earbw0f7kjfo04rvixdt9voqfp5878cu17safh9wqjdhg0he6fdjjsqkcaicr8gvrfy8q6ark29hnoik23n8yzpa13eiwjmupnoeu5xiahna9rdy1d6f1drirbp2pf67r40iyqnflgsw5a30rtjtolymjzn339abvtdata099bd4e9zgmi7n2blzi8h3kt8oek4268z9opdzn7vz3kisg1lvzjtr50noypt6cw7gwyjj85ogmz4ag85x707migouj9406jvjygmbnlc0fn8wuuraxxwiu6r23lp46eo35yx1qklpkdbx3pq2i55m53wwznlgz6zpjjgrjoy6ft0m45rsno9jyvtjlh7suswamko139p6dcha52xbhi07ob6p506c0rur7j8hfki8vtx2plzxcvh5azhnx3zpcz9k32hm4jev8o42wnug0jia5yv2nw0yn3ph9sr98ll08xpoj67utrgqbhbqgwxmijj61mbzi0hli3jwjlhjwdomp2fm2i67dw5c == 
\q\x\4\q\0\n\3\g\0\u\4\l\3\h\2\7\8\o\j\w\7\v\h\5\f\u\8\5\v\x\9\0\6\b\p\c\k\f\2\4\g\z\8\s\j\w\u\3\w\s\2\u\n\g\7\4\2\s\c\s\c\a\n\7\t\u\w\o\a\i\t\c\5\n\4\m\p\3\q\q\l\n\b\z\b\b\j\w\l\8\s\n\j\6\y\f\f\s\d\k\7\b\o\g\3\w\e\v\d\e\c\6\f\j\d\9\j\3\u\h\b\q\5\w\z\d\o\9\e\z\u\1\9\2\e\3\j\h\c\2\w\m\z\r\i\u\l\7\l\w\j\9\b\a\8\2\1\u\g\8\u\0\v\m\5\o\q\p\u\v\z\q\t\7\l\w\3\q\p\k\g\1\g\6\2\q\z\z\w\f\e\f\q\2\9\0\z\y\n\7\b\m\b\k\x\h\6\6\v\e\0\p\7\k\a\a\a\8\u\k\1\j\2\1\d\x\u\b\j\y\2\8\8\h\1\9\6\b\i\0\y\f\n\g\u\b\w\p\y\m\i\z\z\e\9\q\6\b\m\j\r\e\a\2\0\y\o\c\s\n\n\d\3\s\l\i\j\9\4\a\d\b\f\a\i\e\3\j\t\3\y\8\5\m\6\r\2\m\e\w\7\v\r\o\z\b\8\3\y\k\d\m\u\m\z\t\g\x\z\v\z\4\j\z\0\a\l\y\4\o\1\t\q\f\j\6\0\u\7\e\y\q\u\o\p\r\f\1\w\2\b\s\v\c\r\f\x\v\u\y\8\s\y\1\3\p\d\2\v\8\j\r\d\o\h\t\d\w\t\f\w\y\q\g\7\o\5\n\v\v\5\w\d\8\0\p\d\0\i\n\4\r\5\h\r\0\z\o\w\j\1\2\m\l\j\l\0\q\3\j\e\6\c\3\1\z\9\a\h\4\i\n\1\3\w\y\s\j\v\6\k\9\f\w\z\i\u\9\5\h\a\l\k\9\g\3\t\u\n\h\0\x\q\b\x\9\v\5\w\t\m\q\h\x\3\f\r\6\y\w\x\i\a\n\t\s\z\p\u\l\q\9\b\p\1\2\4\t\b\e\z\h\6\j\j\a\x\e\a\u\k\j\f\2\1\2\2\1\c\p\l\n\y\o\d\1\w\g\g\d\m\c\3\r\g\k\x\1\b\t\z\t\e\5\k\u\6\f\r\n\r\s\a\p\8\j\b\u\p\f\h\f\e\c\s\6\c\k\b\h\r\e\c\p\y\v\3\w\t\w\7\9\v\e\d\6\m\v\p\a\j\4\j\s\w\b\m\m\h\k\7\5\q\e\p\b\7\u\7\k\8\o\0\p\u\l\t\y\m\n\o\a\e\q\g\7\m\4\o\o\m\q\j\0\h\s\u\u\m\6\i\1\a\l\n\t\r\e\k\0\6\t\6\2\a\4\j\5\9\8\s\r\8\3\6\5\8\v\x\g\9\f\9\5\v\j\8\y\v\o\3\2\1\1\m\x\l\2\u\q\s\u\6\l\y\9\s\w\5\6\y\j\6\6\x\1\j\1\t\x\4\u\o\l\a\1\n\1\x\b\y\a\f\6\w\a\z\b\1\2\t\3\a\j\l\k\d\j\l\m\p\k\k\p\5\r\x\2\6\l\e\q\t\c\x\x\0\f\p\q\b\9\h\7\m\y\2\g\4\w\1\g\4\d\a\7\i\f\9\u\w\2\b\8\h\y\p\3\6\4\j\w\n\p\6\g\n\s\6\7\u\h\q\e\p\9\y\a\3\0\w\z\r\m\1\x\0\3\l\o\e\p\c\6\q\6\m\4\i\y\t\e\t\t\x\3\q\d\e\p\z\j\k\h\y\v\0\c\8\6\m\z\h\m\f\x\o\j\8\z\g\o\p\a\4\l\o\c\4\s\1\s\c\d\t\p\a\r\1\a\4\l\u\g\z\u\d\n\z\5\m\0\y\v\f\9\z\h\7\a\h\u\9\l\0\f\k\4\6\8\7\j\7\k\6\c\f\t\1\y\5\l\4\4\t\j\b\c\a\z\l\m\p\3\p\a\s\v\w\i\4\0\0\e\a\8\8\p\k\m\a\p\s\e\0\j\8\4\2\p\x\a\a\3\0\o\i\0\j\r\2\s\d\o\z\0\i\y\1\i\e\v\g\6\2\h\8\t\x\m\s\n\m\6\h\6\2\t\s\a\q\m\t\u\i\6\y\4\v\3\p\q\l\g\c\q\h\4\3\5\m\9\m\k\2\s\z\7\j\t\f\6\d\f\n\l\z\1\4\8\i\8\a\3\v\z\5\i\m\f\o\h\y\3\d\9\p\y\e\5\4\1\3\u\i\t\b\c\v\q\g\c\k\a\s\a\z\8\w\k\s\z\1\h\0\7\c\y\4\f\6\f\7\2\v\3\q\i\t\h\j\h\h\s\v\d\c\y\o\i\3\y\h\7\d\y\b\m\b\i\y\m\6\e\4\a\1\z\0\2\s\4\h\n\1\p\5\b\r\x\0\a\w\8\j\g\n\h\e\2\r\l\7\h\1\4\2\p\0\t\o\i\a\g\2\l\y\8\1\c\i\1\1\8\x\v\c\9\w\6\y\b\6\6\e\q\z\q\t\n\t\y\p\h\c\d\t\v\r\p\v\0\y\f\s\9\f\m\1\e\d\i\m\2\b\p\n\n\5\g\c\m\a\r\u\m\k\9\1\v\n\3\1\t\q\1\l\r\v\q\6\a\r\y\l\w\a\o\x\r\j\i\b\w\b\x\u\j\1\1\y\2\x\s\1\u\a\n\s\s\m\b\g\0\8\j\a\i\i\y\x\g\j\3\f\v\7\w\1\n\2\h\5\5\y\l\9\d\9\m\p\u\c\d\u\d\b\q\k\6\7\b\p\4\3\6\a\t\2\z\s\b\d\g\6\u\3\8\m\n\0\c\v\t\j\7\p\q\h\9\6\2\t\b\7\h\p\n\h\u\0\b\4\d\n\h\7\m\k\5\e\e\4\3\0\3\g\k\z\l\x\s\u\r\w\q\c\t\e\x\h\3\y\w\r\5\1\m\5\i\8\e\w\k\b\0\2\e\q\z\8\m\y\p\s\3\5\g\x\n\r\n\j\5\g\6\m\6\l\z\1\0\i\b\6\w\f\x\j\r\z\q\3\z\j\j\z\u\p\w\4\o\w\q\4\j\2\k\n\u\y\b\w\8\s\r\3\f\2\6\i\e\o\h\n\l\g\4\y\o\d\f\j\6\n\u\b\0\a\v\2\c\e\d\n\v\1\o\z\h\o\n\j\a\e\c\u\e\o\y\m\h\0\f\5\i\8\x\w\z\a\6\w\q\n\n\1\r\i\s\b\r\1\3\k\5\f\d\m\z\9\x\d\6\m\0\s\a\2\q\d\8\1\x\8\9\1\h\c\z\x\e\z\8\i\u\u\n\1\5\n\f\h\9\w\6\3\u\f\a\c\e\0\x\n\j\2\7\g\c\6\e\9\m\3\0\4\f\5\p\x\r\6\2\s\r\2\z\0\a\6\b\0\5\p\7\d\2\9\n\l\t\h\6\l\6\k\3\z\3\l\5\e\d\2\h\0\z\7\u\2\a\s\y\j\8\y\4\s\3\u\q\7\m\1\g\e\f\p\v\6\a\1\z\e\4\a\2\1\h\f\x\f\n\y\l\7\0\3\c\j\m\j\b\6\h\g\h\g\l\q\3\2\7\0\b\c\i\0\f\y\o\n\r\l\8\i\u\x\e\k\k\a\v\f\y\t\8\g\i\y\o\e\4\g\x\e\j\i\7\t\3\j\j\3\5\r\5\x\w\c\z\t\n\6\x\y\9\h\z\k\9\c\m\8\9\1\h\i\x\2\o\t\s\p\a\x\v\k\p\z\m\u\0\3\d\w\t\p\c\e\d\e\9\a\t\x\
5\n\1\u\s\9\l\3\x\t\h\8\c\c\l\6\s\d\y\g\b\7\0\3\4\a\6\k\x\s\x\7\w\7\6\a\v\w\s\8\6\f\b\o\m\7\1\q\y\k\t\u\b\4\1\d\2\9\i\4\4\5\v\3\d\e\a\i\h\n\7\v\8\l\j\k\l\i\b\7\o\5\4\v\t\t\8\y\u\1\i\v\5\n\j\h\o\m\z\6\j\y\y\j\n\m\g\c\f\o\m\f\a\h\s\k\e\m\y\p\m\6\f\0\o\p\c\a\d\x\e\w\u\7\l\f\c\5\9\u\6\b\4\q\t\u\4\r\1\w\4\i\2\3\q\y\q\o\3\e\i\v\u\a\r\t\5\4\1\v\1\x\6\e\b\f\e\7\i\r\d\6\n\h\w\b\5\v\s\7\b\9\8\7\i\2\u\p\1\b\9\y\n\9\0\s\w\5\d\o\t\m\n\i\s\z\e\i\k\8\g\s\e\j\q\q\n\b\0\t\t\8\e\o\g\s\s\r\q\2\9\u\q\6\2\d\j\j\1\5\m\y\x\a\s\6\0\a\p\i\q\1\z\4\d\9\6\7\o\t\z\r\5\d\e\w\2\b\4\u\n\x\i\d\d\g\k\1\p\w\w\g\c\m\c\b\5\b\h\q\1\t\u\x\4\h\n\m\p\w\l\9\o\f\n\b\v\3\p\c\1\4\q\k\m\t\1\f\2\f\z\j\k\1\a\t\c\k\f\x\n\2\m\o\l\i\u\b\9\c\q\q\p\n\9\4\k\p\7\f\5\l\h\7\0\c\c\h\m\u\u\h\w\5\t\k\d\y\1\z\u\b\6\a\3\9\a\e\x\p\9\q\l\4\h\2\8\q\7\k\m\i\s\v\0\o\k\6\c\s\1\v\6\z\c\8\r\5\y\7\u\g\z\s\7\m\s\o\6\m\t\w\z\v\7\l\o\i\v\j\0\8\a\v\1\a\0\0\i\a\4\j\z\t\n\r\g\w\0\r\r\m\b\1\y\u\a\8\a\l\n\m\k\o\4\u\j\0\n\t\5\u\x\d\n\q\a\u\d\4\j\w\k\z\2\b\l\x\5\3\a\k\y\1\0\m\9\p\v\0\r\6\v\x\e\9\r\s\g\o\8\l\6\v\m\i\w\v\o\1\z\k\e\u\t\k\g\5\o\c\u\g\p\1\e\k\i\h\7\x\e\z\4\b\5\x\d\n\x\5\a\l\q\q\9\8\3\r\w\3\l\k\p\5\6\1\7\q\k\z\y\l\8\p\7\f\a\5\3\z\d\2\v\n\3\x\h\w\a\u\j\h\m\0\l\l\c\q\s\u\e\v\m\w\o\m\r\j\u\i\d\w\z\7\p\1\a\0\6\z\0\2\b\u\2\z\9\i\5\i\5\r\p\f\2\a\n\3\3\3\c\2\v\g\4\1\x\r\r\h\l\s\6\w\3\l\2\s\8\g\3\u\2\t\a\9\t\n\z\a\l\s\l\v\k\d\p\2\d\c\z\7\5\w\3\n\v\f\z\n\g\h\n\7\j\k\b\8\8\k\7\n\d\1\0\w\5\o\9\k\8\m\3\j\o\d\p\f\i\a\a\z\b\g\g\p\v\g\0\o\a\z\9\8\a\9\p\7\q\f\m\f\7\b\d\a\2\4\9\3\1\2\5\7\0\h\1\s\n\q\i\w\1\f\l\f\3\b\5\j\4\7\8\f\3\x\e\3\j\o\5\t\g\x\g\1\o\u\4\z\d\s\c\d\i\c\g\x\0\7\o\g\0\s\k\z\u\a\y\r\e\5\x\j\j\k\g\9\l\f\r\4\z\s\y\v\z\h\p\8\o\2\c\t\v\0\n\2\k\m\8\f\y\w\r\o\4\8\h\1\y\a\n\u\7\n\w\l\d\y\0\m\z\p\b\3\1\2\e\6\i\m\z\p\s\3\z\2\f\b\3\h\e\b\2\t\r\t\9\0\n\q\i\m\5\l\5\l\r\c\e\x\z\0\m\g\s\u\c\a\p\w\b\4\t\q\c\s\c\j\d\y\s\v\h\2\u\1\t\q\f\r\b\x\4\o\i\9\k\5\p\f\k\d\j\4\k\q\q\f\w\b\5\n\k\2\s\l\d\l\x\f\l\6\k\r\d\h\l\k\t\4\a\0\z\1\7\b\6\8\5\u\0\f\n\7\a\8\4\b\c\v\y\3\z\2\r\v\j\x\9\0\g\d\o\v\x\6\d\q\b\f\e\h\0\q\e\r\2\c\c\b\n\2\3\0\0\7\c\h\u\5\8\w\p\i\m\h\h\f\y\f\o\d\p\4\7\a\1\p\s\y\5\p\p\f\6\o\u\0\n\l\a\j\6\9\8\2\k\m\q\j\c\l\c\v\g\i\r\r\t\q\d\u\5\u\c\3\0\d\x\b\n\4\t\2\0\3\6\a\j\u\a\u\2\k\l\f\w\b\p\j\h\p\u\3\j\d\l\v\a\t\p\1\4\u\q\f\0\e\2\o\j\y\8\u\v\o\9\5\r\t\z\e\o\g\b\l\q\y\q\9\e\k\l\x\5\a\g\2\l\c\5\j\h\6\y\i\o\a\v\r\j\z\2\x\9\1\8\m\g\p\u\0\k\h\a\z\8\4\0\1\8\c\b\e\a\i\n\j\n\f\f\0\x\8\7\1\z\x\h\e\u\j\s\l\6\2\r\k\7\5\n\g\d\p\5\y\r\4\y\0\z\9\i\0\6\f\8\g\z\m\8\i\h\p\h\e\3\c\a\d\t\i\0\5\2\4\e\x\1\0\a\8\g\u\l\i\j\h\9\w\g\l\k\b\h\2\l\7\5\u\2\x\e\a\y\0\0\b\w\0\c\n\b\e\0\u\t\a\f\t\a\h\h\j\1\1\3\6\7\e\9\p\1\2\m\2\9\g\g\w\r\e\j\g\z\1\8\e\0\3\m\9\q\i\0\r\w\r\e\w\g\b\4\f\t\p\w\m\a\k\n\7\g\c\e\w\q\g\i\4\h\w\v\h\t\m\m\r\b\p\y\4\c\c\x\w\t\c\c\j\y\2\z\o\m\e\a\9\8\9\4\8\3\u\a\b\q\j\c\1\7\2\y\v\9\r\h\s\7\e\w\1\s\g\g\o\3\e\l\0\c\7\8\8\u\l\0\n\z\w\p\i\p\0\4\3\s\s\y\f\r\b\c\k\a\l\x\w\t\b\y\g\y\8\z\n\c\p\6\i\h\l\i\u\k\k\v\0\b\b\i\k\8\8\o\n\q\3\i\c\i\h\k\a\m\7\z\e\e\e\u\7\5\6\m\g\b\2\u\k\x\7\l\3\s\i\v\w\m\n\f\7\g\t\o\3\s\d\2\o\t\a\i\u\n\n\h\i\x\s\f\n\0\r\k\g\y\x\a\6\4\g\y\j\t\n\o\q\a\1\o\u\g\n\e\j\n\k\a\r\2\1\1\2\o\s\l\m\5\n\j\m\m\7\l\1\5\y\8\n\u\g\l\0\p\p\1\k\c\o\p\5\i\9\6\h\a\4\q\z\6\m\4\d\e\8\e\5\0\4\8\x\s\y\l\s\h\r\4\p\0\e\q\y\5\h\9\z\i\w\5\7\3\0\j\e\7\i\m\7\a\9\7\j\y\y\b\n\t\w\t\4\h\6\q\q\a\h\7\i\j\5\n\e\i\i\b\3\7\j\g\q\6\q\y\v\n\j\c\e\m\1\a\z\x\x\w\d\x\k\s\7\n\0\c\g\q\x\r\s\1\7\e\3\j\6\f\w\t\i\z\4\s\r\g\0\v\2\q\w\h\0\z\k\p\k\x\w\8\s\i\4\4\c\n\1\x\t\o\m\s\h\1\z\t\d\f\o\e\d\p\i\t\u\m\q\h\e\5\f\4\3\w\3\m\6
\k\y\q\i\s\c\a\g\p\o\o\t\z\7\h\a\u\8\n\g\3\e\a\r\b\w\0\f\7\k\j\f\o\0\4\r\v\i\x\d\t\9\v\o\q\f\p\5\8\7\8\c\u\1\7\s\a\f\h\9\w\q\j\d\h\g\0\h\e\6\f\d\j\j\s\q\k\c\a\i\c\r\8\g\v\r\f\y\8\q\6\a\r\k\2\9\h\n\o\i\k\2\3\n\8\y\z\p\a\1\3\e\i\w\j\m\u\p\n\o\e\u\5\x\i\a\h\n\a\9\r\d\y\1\d\6\f\1\d\r\i\r\b\p\2\p\f\6\7\r\4\0\i\y\q\n\f\l\g\s\w\5\a\3\0\r\t\j\t\o\l\y\m\j\z\n\3\3\9\a\b\v\t\d\a\t\a\0\9\9\b\d\4\e\9\z\g\m\i\7\n\2\b\l\z\i\8\h\3\k\t\8\o\e\k\4\2\6\8\z\9\o\p\d\z\n\7\v\z\3\k\i\s\g\1\l\v\z\j\t\r\5\0\n\o\y\p\t\6\c\w\7\g\w\y\j\j\8\5\o\g\m\z\4\a\g\8\5\x\7\0\7\m\i\g\o\u\j\9\4\0\6\j\v\j\y\g\m\b\n\l\c\0\f\n\8\w\u\u\r\a\x\x\w\i\u\6\r\2\3\l\p\4\6\e\o\3\5\y\x\1\q\k\l\p\k\d\b\x\3\p\q\2\i\5\5\m\5\3\w\w\z\n\l\g\z\6\z\p\j\j\g\r\j\o\y\6\f\t\0\m\4\5\r\s\n\o\9\j\y\v\t\j\l\h\7\s\u\s\w\a\m\k\o\1\3\9\p\6\d\c\h\a\5\2\x\b\h\i\0\7\o\b\6\p\5\0\6\c\0\r\u\r\7\j\8\h\f\k\i\8\v\t\x\2\p\l\z\x\c\v\h\5\a\z\h\n\x\3\z\p\c\z\9\k\3\2\h\m\4\j\e\v\8\o\4\2\w\n\u\g\0\j\i\a\5\y\v\2\n\w\0\y\n\3\p\h\9\s\r\9\8\l\l\0\8\x\p\o\j\6\7\u\t\r\g\q\b\h\b\q\g\w\x\m\i\j\j\6\1\m\b\z\i\0\h\l\i\3\j\w\j\l\h\j\w\d\o\m\p\2\f\m\2\i\6\7\d\w\5\c ]] 01:04:52.195 01:04:52.195 real 0m1.132s 01:04:52.195 user 0m0.731s 01:04:52.195 sys 0m0.515s 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:52.195 ************************************ 01:04:52.195 END TEST dd_rw_offset 01:04:52.195 ************************************ 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 01:04:52.195 11:01:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:04:52.452 [2024-07-22 11:01:57.399818] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
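The cleanup step traced above is the clear_nvme helper from dd/common.sh. Its behaviour can be reconstructed only loosely from the @10-@18 trace entries, so the following is a sketch of what those entries show rather than the helper's actual source (DD and CONF as in the earlier sketches):

clear_nvme() {                          # rough reconstruction; dd/common.sh is the authoritative source
    local bdev=$1                       # e.g. Nvme0n1
    local nvme_ref=$2                   # passed as '' in the traces above
    local size=${3:-0xffff}             # 57344, 49152 or the 0xffff default in this run
    local bs=1048576 count=1            # one 1 MiB zero-write covers every size used here
    "$DD" --if=/dev/zero --bs=$bs --ob="$bdev" --count=$count --json <(echo "$CONF")
}

In this run it is called as clear_nvme Nvme0n1 '' 57344 or clear_nvme Nvme0n1 '' 49152 after each read/write pass, and as clear_nvme Nvme0n1 (size left at its default) in the final cleanup above.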
01:04:52.452 [2024-07-22 11:01:57.400034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75401 ] 01:04:52.452 { 01:04:52.452 "subsystems": [ 01:04:52.452 { 01:04:52.452 "subsystem": "bdev", 01:04:52.452 "config": [ 01:04:52.452 { 01:04:52.452 "params": { 01:04:52.452 "trtype": "pcie", 01:04:52.452 "traddr": "0000:00:10.0", 01:04:52.452 "name": "Nvme0" 01:04:52.452 }, 01:04:52.452 "method": "bdev_nvme_attach_controller" 01:04:52.452 }, 01:04:52.452 { 01:04:52.452 "method": "bdev_wait_for_examine" 01:04:52.452 } 01:04:52.452 ] 01:04:52.452 } 01:04:52.452 ] 01:04:52.452 } 01:04:52.452 [2024-07-22 11:01:57.540230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:52.452 [2024-07-22 11:01:57.585207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:52.452 [2024-07-22 11:01:57.626731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:52.727  Copying: 1024/1024 [kB] (average 500 MBps) 01:04:52.727 01:04:52.727 11:01:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:52.727 ************************************ 01:04:52.727 END TEST spdk_dd_basic_rw 01:04:52.727 ************************************ 01:04:52.727 01:04:52.727 real 0m15.364s 01:04:52.727 user 0m10.359s 01:04:52.727 sys 0m6.018s 01:04:52.727 11:01:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:52.727 11:01:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:04:53.009 11:01:57 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 01:04:53.009 11:01:57 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 01:04:53.009 11:01:57 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:04:53.009 11:01:57 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:53.009 11:01:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:04:53.009 ************************************ 01:04:53.009 START TEST spdk_dd_posix 01:04:53.009 ************************************ 01:04:53.009 11:01:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 01:04:53.009 * Looking for test storage... 
01:04:53.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 01:04:53.009 * First test run, liburing in use 01:04:53.009 11:01:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:04:53.010 ************************************ 01:04:53.010 START TEST dd_flag_append 01:04:53.010 ************************************ 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=o44kzicernwa6ppaijpz7y8q18847qoo 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=rxpt5e54wylztf5jdqg9bu7qnchqi9c5 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s o44kzicernwa6ppaijpz7y8q18847qoo 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s rxpt5e54wylztf5jdqg9bu7qnchqi9c5 01:04:53.010 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 01:04:53.010 [2024-07-22 11:01:58.156608] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
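The dd_flag_append test that begins above seeds both dump files with 32 random characters, copies dump0 onto dump1 with spdk_dd's --oflag=append, and asserts that dump1 ends up as the concatenation of the two strings (the [[ rxpt...o44... == ... ]] check in the next trace block). A reduced sketch with local file names, reusing DD from the earlier sketches; the tr pipeline is a stand-in for the suite's gen_bytes 32:

a=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)     # stand-in for gen_bytes 32 (dump0 contents)
b=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)     # stand-in for gen_bytes 32 (dump1 contents)
printf %s "$a" > dump0
printf %s "$b" > dump1
"$DD" --if=dump0 --of=dump1 --oflag=append           # plain file copy, appending to the output instead of truncating it
[[ $(<dump1) == "$b$a" ]] && echo 'append OK'        # dump1 must now be its old contents followed by dump0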
01:04:53.010 [2024-07-22 11:01:58.156677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75458 ] 01:04:53.268 [2024-07-22 11:01:58.299154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:53.268 [2024-07-22 11:01:58.344556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:53.268 [2024-07-22 11:01:58.385390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:53.525  Copying: 32/32 [B] (average 31 kBps) 01:04:53.525 01:04:53.525 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ rxpt5e54wylztf5jdqg9bu7qnchqi9c5o44kzicernwa6ppaijpz7y8q18847qoo == \r\x\p\t\5\e\5\4\w\y\l\z\t\f\5\j\d\q\g\9\b\u\7\q\n\c\h\q\i\9\c\5\o\4\4\k\z\i\c\e\r\n\w\a\6\p\p\a\i\j\p\z\7\y\8\q\1\8\8\4\7\q\o\o ]] 01:04:53.525 01:04:53.525 real 0m0.474s 01:04:53.525 user 0m0.235s 01:04:53.525 sys 0m0.231s 01:04:53.525 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:53.525 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 01:04:53.525 ************************************ 01:04:53.525 END TEST dd_flag_append 01:04:53.525 ************************************ 01:04:53.525 11:01:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 01:04:53.525 11:01:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 01:04:53.525 11:01:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:04:53.525 11:01:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:53.526 11:01:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:04:53.526 ************************************ 01:04:53.526 START TEST dd_flag_directory 01:04:53.526 ************************************ 01:04:53.526 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 01:04:53.526 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:04:53.526 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 01:04:53.526 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:04:53.526 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:53.526 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:53.526 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:53.526 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:53.526 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
01:04:53.526 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:53.526 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:53.526 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:04:53.526 11:01:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:04:53.526 [2024-07-22 11:01:58.707924] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:53.526 [2024-07-22 11:01:58.707993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75488 ] 01:04:53.784 [2024-07-22 11:01:58.849114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:53.784 [2024-07-22 11:01:58.895035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:53.784 [2024-07-22 11:01:58.942758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:53.784 [2024-07-22 11:01:58.963814] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:04:53.784 [2024-07-22 11:01:58.963875] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:04:53.784 [2024-07-22 11:01:58.963888] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:04:54.042 [2024-07-22 11:01:59.052837] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:54.042 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:54.043 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:54.043 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:04:54.043 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:04:54.043 [2024-07-22 11:01:59.191667] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:54.043 [2024-07-22 11:01:59.191756] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75496 ] 01:04:54.301 [2024-07-22 11:01:59.334025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:54.301 [2024-07-22 11:01:59.379911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:54.301 [2024-07-22 11:01:59.420771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:54.301 [2024-07-22 11:01:59.442321] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:04:54.301 [2024-07-22 11:01:59.442368] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:04:54.301 [2024-07-22 11:01:59.442381] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:04:54.559 [2024-07-22 11:01:59.532165] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:04:54.559 01:04:54.559 real 0m0.961s 01:04:54.559 user 0m0.480s 01:04:54.559 sys 0m0.271s 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 01:04:54.559 ************************************ 01:04:54.559 END TEST dd_flag_directory 01:04:54.559 
************************************ 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:04:54.559 ************************************ 01:04:54.559 START TEST dd_flag_nofollow 01:04:54.559 ************************************ 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:04:54.559 11:01:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:54.559 
[2024-07-22 11:01:59.750860] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:54.559 [2024-07-22 11:01:59.750936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75526 ] 01:04:54.818 [2024-07-22 11:01:59.892439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:54.818 [2024-07-22 11:01:59.935416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:54.818 [2024-07-22 11:01:59.976281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:54.818 [2024-07-22 11:01:59.996744] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 01:04:54.818 [2024-07-22 11:01:59.996791] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 01:04:54.818 [2024-07-22 11:01:59.996804] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:04:55.077 [2024-07-22 11:02:00.088239] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:04:55.077 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:04:55.077 [2024-07-22 11:02:00.222075] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:55.077 [2024-07-22 11:02:00.222152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75530 ] 01:04:55.337 [2024-07-22 11:02:00.362653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:55.337 [2024-07-22 11:02:00.407111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:55.337 [2024-07-22 11:02:00.447918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:55.337 [2024-07-22 11:02:00.469146] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 01:04:55.337 [2024-07-22 11:02:00.469197] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 01:04:55.337 [2024-07-22 11:02:00.469211] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:04:55.596 [2024-07-22 11:02:00.559979] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:04:55.596 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 01:04:55.596 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:04:55.596 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 01:04:55.596 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 01:04:55.596 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 01:04:55.596 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:04:55.596 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 01:04:55.596 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 01:04:55.596 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 01:04:55.596 11:02:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:55.596 [2024-07-22 11:02:00.697483] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:55.596 [2024-07-22 11:02:00.697554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75543 ] 01:04:55.855 [2024-07-22 11:02:00.838491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:55.855 [2024-07-22 11:02:00.885183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:55.855 [2024-07-22 11:02:00.926321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:56.115  Copying: 512/512 [B] (average 500 kBps) 01:04:56.115 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ zuxi5uxxgbjabxm21gk8z685r1knhq7iig4sodpu4thduwwh0s5kpnneke084a2jijzo5u6nf4gxv3v8fao5gbpb41dvkdubftsg6glhbx4utgdnm2l8jzcmfyheajz3bwcflybgka8i0c55ugkyv9wexs3fidh2odg52sz7zy29g218ypdy5znsmsf0iwqhfsq5dvyt2svy3h97ti2pym10edmsjh4l4krfoqxh4zdmr4464ipvz6um3203hgqtyn664ju08s107igi6ty8h8s7m2yowq8ibeysitfsu779vb6ijy8vzrhbgfqp0pv9mps9n0qb4zfqik7pikf3d2lb2fj240spucxsrfia74i9fxulyszmh7fqnolapviqnsjqke0trd7z9ssgerdaeba9qzbtztb8l6ezdy9rvtsmr0q0d1mlzjv2ccziwgix1skliau2q06ikajeiokp73miw44uqn4uoc4yyhsm5fl713xd4rm1eetlzzcbeltg == \z\u\x\i\5\u\x\x\g\b\j\a\b\x\m\2\1\g\k\8\z\6\8\5\r\1\k\n\h\q\7\i\i\g\4\s\o\d\p\u\4\t\h\d\u\w\w\h\0\s\5\k\p\n\n\e\k\e\0\8\4\a\2\j\i\j\z\o\5\u\6\n\f\4\g\x\v\3\v\8\f\a\o\5\g\b\p\b\4\1\d\v\k\d\u\b\f\t\s\g\6\g\l\h\b\x\4\u\t\g\d\n\m\2\l\8\j\z\c\m\f\y\h\e\a\j\z\3\b\w\c\f\l\y\b\g\k\a\8\i\0\c\5\5\u\g\k\y\v\9\w\e\x\s\3\f\i\d\h\2\o\d\g\5\2\s\z\7\z\y\2\9\g\2\1\8\y\p\d\y\5\z\n\s\m\s\f\0\i\w\q\h\f\s\q\5\d\v\y\t\2\s\v\y\3\h\9\7\t\i\2\p\y\m\1\0\e\d\m\s\j\h\4\l\4\k\r\f\o\q\x\h\4\z\d\m\r\4\4\6\4\i\p\v\z\6\u\m\3\2\0\3\h\g\q\t\y\n\6\6\4\j\u\0\8\s\1\0\7\i\g\i\6\t\y\8\h\8\s\7\m\2\y\o\w\q\8\i\b\e\y\s\i\t\f\s\u\7\7\9\v\b\6\i\j\y\8\v\z\r\h\b\g\f\q\p\0\p\v\9\m\p\s\9\n\0\q\b\4\z\f\q\i\k\7\p\i\k\f\3\d\2\l\b\2\f\j\2\4\0\s\p\u\c\x\s\r\f\i\a\7\4\i\9\f\x\u\l\y\s\z\m\h\7\f\q\n\o\l\a\p\v\i\q\n\s\j\q\k\e\0\t\r\d\7\z\9\s\s\g\e\r\d\a\e\b\a\9\q\z\b\t\z\t\b\8\l\6\e\z\d\y\9\r\v\t\s\m\r\0\q\0\d\1\m\l\z\j\v\2\c\c\z\i\w\g\i\x\1\s\k\l\i\a\u\2\q\0\6\i\k\a\j\e\i\o\k\p\7\3\m\i\w\4\4\u\q\n\4\u\o\c\4\y\y\h\s\m\5\f\l\7\1\3\x\d\4\r\m\1\e\e\t\l\z\z\c\b\e\l\t\g ]] 01:04:56.115 01:04:56.115 real 0m1.435s 01:04:56.115 user 0m0.717s 01:04:56.115 sys 0m0.503s 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 01:04:56.115 ************************************ 01:04:56.115 END TEST dd_flag_nofollow 01:04:56.115 ************************************ 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:04:56.115 ************************************ 01:04:56.115 START TEST dd_flag_noatime 01:04:56.115 ************************************ 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 01:04:56.115 11:02:01 
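The dd_flag_nofollow section that finished above builds symlinks dd.dump0.link and dd.dump1.link pointing at the dump files, then checks three things: reading through the link with --iflag=nofollow fails ("Too many levels of symbolic links"), writing through the link with --oflag=nofollow fails the same way, and a plain copy through the input link succeeds. A condensed sketch, using the same hypothetical failure helper as above:

  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  # nofollow must refuse to traverse a symlink on either side of the copy...
  not_expected_to_succeed "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
  not_expected_to_succeed "$SPDK_DD" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow
  # ...while an ordinary copy through the input link works, giving the "Copying: 512/512" line above.
  "$SPDK_DD" --if=dd.dump0.link --of=dd.dump1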
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721646120 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721646121 01:04:56.115 11:02:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 01:04:57.051 11:02:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:57.310 [2024-07-22 11:02:02.272385] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:04:57.310 [2024-07-22 11:02:02.272463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75580 ] 01:04:57.310 [2024-07-22 11:02:02.413561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:57.310 [2024-07-22 11:02:02.459085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:57.310 [2024-07-22 11:02:02.499694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:57.567  Copying: 512/512 [B] (average 500 kBps) 01:04:57.567 01:04:57.567 11:02:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:04:57.567 11:02:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721646120 )) 01:04:57.567 11:02:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:57.567 11:02:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721646121 )) 01:04:57.567 11:02:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:04:57.567 [2024-07-22 11:02:02.749235] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:57.567 [2024-07-22 11:02:02.749304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75599 ] 01:04:57.824 [2024-07-22 11:02:02.878532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:57.824 [2024-07-22 11:02:02.928142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:57.824 [2024-07-22 11:02:02.968639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:58.081  Copying: 512/512 [B] (average 500 kBps) 01:04:58.081 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721646122 )) 01:04:58.081 01:04:58.081 real 0m1.973s 01:04:58.081 user 0m0.470s 01:04:58.081 sys 0m0.494s 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 01:04:58.081 ************************************ 01:04:58.081 END TEST dd_flag_noatime 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 01:04:58.081 ************************************ 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:04:58.081 ************************************ 01:04:58.081 START TEST dd_flags_misc 01:04:58.081 ************************************ 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 01:04:58.081 11:02:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 01:04:58.082 11:02:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 01:04:58.082 11:02:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 01:04:58.082 11:02:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:04:58.082 11:02:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 01:04:58.338 [2024-07-22 11:02:03.301031] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
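The dd_flag_noatime checks above compare access timestamps around two copies: record dump0's atime with stat --printf=%X, sleep one second, copy with --iflag=noatime and verify the atime did not move, then copy again without the flag and verify it did advance. In sketch form (same caveats as the earlier sketches; whether the final assertion holds also depends on the filesystem's atime mount options, which the CI image evidently permits):

  atime_before=$(stat --printf=%X dd.dump0)            # %X = access time, as in the trace
  sleep 1
  "$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) == atime_before ))   # noatime read leaves the atime alone
  "$SPDK_DD" --if=dd.dump0 --of=dd.dump1
  (( atime_before < $(stat --printf=%X dd.dump0) ))    # an ordinary read moves it forward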
01:04:58.338 [2024-07-22 11:02:03.301099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75622 ] 01:04:58.338 [2024-07-22 11:02:03.443689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:58.338 [2024-07-22 11:02:03.499974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:58.338 [2024-07-22 11:02:03.541066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:58.596  Copying: 512/512 [B] (average 500 kBps) 01:04:58.596 01:04:58.597 11:02:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4ich2xlpxancqep5d6bhgrru1y0rar22xa12uxvmf8hikqa3uqi115fjdguxe9ja3e9ghhliudsh88iyy6do83wbf15s9unroza922o15xydn6cq5kt4dqglb6m9du3ymjy58571zyajbkk200f0784g4h76raulti23rumxgbz29ctgi15ejgryw9paeqqqhypj580oed3j2t4r8zzkznr7w9sigtongvsd4p8hsu2aqcft7nov1sotp2m5uc1ix7et95x6eekncexsscuuee4hw95u9ex8n33811g4gg4jqdx8d08prsonqqntb2x249907fyz0rlcji7u2bs9e2srrnxmiqspnxysoou8ehouzlchpu4e9dy5izdju6abysortrokl0dlmxcodfqqyfcuq34133zu3ol7dbxuhtf831fizgisj5lxmimhe2acu4f0s14ept347qv32zdu30o47e0n0wp2q6l3tlypo3s7vj1liw5lg08sz82k12h9 == \4\i\c\h\2\x\l\p\x\a\n\c\q\e\p\5\d\6\b\h\g\r\r\u\1\y\0\r\a\r\2\2\x\a\1\2\u\x\v\m\f\8\h\i\k\q\a\3\u\q\i\1\1\5\f\j\d\g\u\x\e\9\j\a\3\e\9\g\h\h\l\i\u\d\s\h\8\8\i\y\y\6\d\o\8\3\w\b\f\1\5\s\9\u\n\r\o\z\a\9\2\2\o\1\5\x\y\d\n\6\c\q\5\k\t\4\d\q\g\l\b\6\m\9\d\u\3\y\m\j\y\5\8\5\7\1\z\y\a\j\b\k\k\2\0\0\f\0\7\8\4\g\4\h\7\6\r\a\u\l\t\i\2\3\r\u\m\x\g\b\z\2\9\c\t\g\i\1\5\e\j\g\r\y\w\9\p\a\e\q\q\q\h\y\p\j\5\8\0\o\e\d\3\j\2\t\4\r\8\z\z\k\z\n\r\7\w\9\s\i\g\t\o\n\g\v\s\d\4\p\8\h\s\u\2\a\q\c\f\t\7\n\o\v\1\s\o\t\p\2\m\5\u\c\1\i\x\7\e\t\9\5\x\6\e\e\k\n\c\e\x\s\s\c\u\u\e\e\4\h\w\9\5\u\9\e\x\8\n\3\3\8\1\1\g\4\g\g\4\j\q\d\x\8\d\0\8\p\r\s\o\n\q\q\n\t\b\2\x\2\4\9\9\0\7\f\y\z\0\r\l\c\j\i\7\u\2\b\s\9\e\2\s\r\r\n\x\m\i\q\s\p\n\x\y\s\o\o\u\8\e\h\o\u\z\l\c\h\p\u\4\e\9\d\y\5\i\z\d\j\u\6\a\b\y\s\o\r\t\r\o\k\l\0\d\l\m\x\c\o\d\f\q\q\y\f\c\u\q\3\4\1\3\3\z\u\3\o\l\7\d\b\x\u\h\t\f\8\3\1\f\i\z\g\i\s\j\5\l\x\m\i\m\h\e\2\a\c\u\4\f\0\s\1\4\e\p\t\3\4\7\q\v\3\2\z\d\u\3\0\o\4\7\e\0\n\0\w\p\2\q\6\l\3\t\l\y\p\o\3\s\7\v\j\1\l\i\w\5\l\g\0\8\s\z\8\2\k\1\2\h\9 ]] 01:04:58.597 11:02:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:04:58.597 11:02:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 01:04:58.597 [2024-07-22 11:02:03.780144] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:58.597 [2024-07-22 11:02:03.780211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75637 ] 01:04:58.855 [2024-07-22 11:02:03.922466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:58.855 [2024-07-22 11:02:03.970391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:58.855 [2024-07-22 11:02:04.012558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:59.113  Copying: 512/512 [B] (average 500 kBps) 01:04:59.113 01:04:59.113 11:02:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4ich2xlpxancqep5d6bhgrru1y0rar22xa12uxvmf8hikqa3uqi115fjdguxe9ja3e9ghhliudsh88iyy6do83wbf15s9unroza922o15xydn6cq5kt4dqglb6m9du3ymjy58571zyajbkk200f0784g4h76raulti23rumxgbz29ctgi15ejgryw9paeqqqhypj580oed3j2t4r8zzkznr7w9sigtongvsd4p8hsu2aqcft7nov1sotp2m5uc1ix7et95x6eekncexsscuuee4hw95u9ex8n33811g4gg4jqdx8d08prsonqqntb2x249907fyz0rlcji7u2bs9e2srrnxmiqspnxysoou8ehouzlchpu4e9dy5izdju6abysortrokl0dlmxcodfqqyfcuq34133zu3ol7dbxuhtf831fizgisj5lxmimhe2acu4f0s14ept347qv32zdu30o47e0n0wp2q6l3tlypo3s7vj1liw5lg08sz82k12h9 == \4\i\c\h\2\x\l\p\x\a\n\c\q\e\p\5\d\6\b\h\g\r\r\u\1\y\0\r\a\r\2\2\x\a\1\2\u\x\v\m\f\8\h\i\k\q\a\3\u\q\i\1\1\5\f\j\d\g\u\x\e\9\j\a\3\e\9\g\h\h\l\i\u\d\s\h\8\8\i\y\y\6\d\o\8\3\w\b\f\1\5\s\9\u\n\r\o\z\a\9\2\2\o\1\5\x\y\d\n\6\c\q\5\k\t\4\d\q\g\l\b\6\m\9\d\u\3\y\m\j\y\5\8\5\7\1\z\y\a\j\b\k\k\2\0\0\f\0\7\8\4\g\4\h\7\6\r\a\u\l\t\i\2\3\r\u\m\x\g\b\z\2\9\c\t\g\i\1\5\e\j\g\r\y\w\9\p\a\e\q\q\q\h\y\p\j\5\8\0\o\e\d\3\j\2\t\4\r\8\z\z\k\z\n\r\7\w\9\s\i\g\t\o\n\g\v\s\d\4\p\8\h\s\u\2\a\q\c\f\t\7\n\o\v\1\s\o\t\p\2\m\5\u\c\1\i\x\7\e\t\9\5\x\6\e\e\k\n\c\e\x\s\s\c\u\u\e\e\4\h\w\9\5\u\9\e\x\8\n\3\3\8\1\1\g\4\g\g\4\j\q\d\x\8\d\0\8\p\r\s\o\n\q\q\n\t\b\2\x\2\4\9\9\0\7\f\y\z\0\r\l\c\j\i\7\u\2\b\s\9\e\2\s\r\r\n\x\m\i\q\s\p\n\x\y\s\o\o\u\8\e\h\o\u\z\l\c\h\p\u\4\e\9\d\y\5\i\z\d\j\u\6\a\b\y\s\o\r\t\r\o\k\l\0\d\l\m\x\c\o\d\f\q\q\y\f\c\u\q\3\4\1\3\3\z\u\3\o\l\7\d\b\x\u\h\t\f\8\3\1\f\i\z\g\i\s\j\5\l\x\m\i\m\h\e\2\a\c\u\4\f\0\s\1\4\e\p\t\3\4\7\q\v\3\2\z\d\u\3\0\o\4\7\e\0\n\0\w\p\2\q\6\l\3\t\l\y\p\o\3\s\7\v\j\1\l\i\w\5\l\g\0\8\s\z\8\2\k\1\2\h\9 ]] 01:04:59.113 11:02:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:04:59.113 11:02:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 01:04:59.113 [2024-07-22 11:02:04.247927] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:59.113 [2024-07-22 11:02:04.247992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75641 ] 01:04:59.371 [2024-07-22 11:02:04.390591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:59.371 [2024-07-22 11:02:04.436307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:59.371 [2024-07-22 11:02:04.479638] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:04:59.629  Copying: 512/512 [B] (average 100 kBps) 01:04:59.629 01:04:59.629 11:02:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4ich2xlpxancqep5d6bhgrru1y0rar22xa12uxvmf8hikqa3uqi115fjdguxe9ja3e9ghhliudsh88iyy6do83wbf15s9unroza922o15xydn6cq5kt4dqglb6m9du3ymjy58571zyajbkk200f0784g4h76raulti23rumxgbz29ctgi15ejgryw9paeqqqhypj580oed3j2t4r8zzkznr7w9sigtongvsd4p8hsu2aqcft7nov1sotp2m5uc1ix7et95x6eekncexsscuuee4hw95u9ex8n33811g4gg4jqdx8d08prsonqqntb2x249907fyz0rlcji7u2bs9e2srrnxmiqspnxysoou8ehouzlchpu4e9dy5izdju6abysortrokl0dlmxcodfqqyfcuq34133zu3ol7dbxuhtf831fizgisj5lxmimhe2acu4f0s14ept347qv32zdu30o47e0n0wp2q6l3tlypo3s7vj1liw5lg08sz82k12h9 == \4\i\c\h\2\x\l\p\x\a\n\c\q\e\p\5\d\6\b\h\g\r\r\u\1\y\0\r\a\r\2\2\x\a\1\2\u\x\v\m\f\8\h\i\k\q\a\3\u\q\i\1\1\5\f\j\d\g\u\x\e\9\j\a\3\e\9\g\h\h\l\i\u\d\s\h\8\8\i\y\y\6\d\o\8\3\w\b\f\1\5\s\9\u\n\r\o\z\a\9\2\2\o\1\5\x\y\d\n\6\c\q\5\k\t\4\d\q\g\l\b\6\m\9\d\u\3\y\m\j\y\5\8\5\7\1\z\y\a\j\b\k\k\2\0\0\f\0\7\8\4\g\4\h\7\6\r\a\u\l\t\i\2\3\r\u\m\x\g\b\z\2\9\c\t\g\i\1\5\e\j\g\r\y\w\9\p\a\e\q\q\q\h\y\p\j\5\8\0\o\e\d\3\j\2\t\4\r\8\z\z\k\z\n\r\7\w\9\s\i\g\t\o\n\g\v\s\d\4\p\8\h\s\u\2\a\q\c\f\t\7\n\o\v\1\s\o\t\p\2\m\5\u\c\1\i\x\7\e\t\9\5\x\6\e\e\k\n\c\e\x\s\s\c\u\u\e\e\4\h\w\9\5\u\9\e\x\8\n\3\3\8\1\1\g\4\g\g\4\j\q\d\x\8\d\0\8\p\r\s\o\n\q\q\n\t\b\2\x\2\4\9\9\0\7\f\y\z\0\r\l\c\j\i\7\u\2\b\s\9\e\2\s\r\r\n\x\m\i\q\s\p\n\x\y\s\o\o\u\8\e\h\o\u\z\l\c\h\p\u\4\e\9\d\y\5\i\z\d\j\u\6\a\b\y\s\o\r\t\r\o\k\l\0\d\l\m\x\c\o\d\f\q\q\y\f\c\u\q\3\4\1\3\3\z\u\3\o\l\7\d\b\x\u\h\t\f\8\3\1\f\i\z\g\i\s\j\5\l\x\m\i\m\h\e\2\a\c\u\4\f\0\s\1\4\e\p\t\3\4\7\q\v\3\2\z\d\u\3\0\o\4\7\e\0\n\0\w\p\2\q\6\l\3\t\l\y\p\o\3\s\7\v\j\1\l\i\w\5\l\g\0\8\s\z\8\2\k\1\2\h\9 ]] 01:04:59.629 11:02:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:04:59.629 11:02:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 01:04:59.629 [2024-07-22 11:02:04.710235] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:04:59.629 [2024-07-22 11:02:04.710307] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75656 ] 01:04:59.888 [2024-07-22 11:02:04.850230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:59.888 [2024-07-22 11:02:04.894917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:04:59.888 [2024-07-22 11:02:04.936938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:00.147  Copying: 512/512 [B] (average 500 kBps) 01:05:00.147 01:05:00.147 11:02:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4ich2xlpxancqep5d6bhgrru1y0rar22xa12uxvmf8hikqa3uqi115fjdguxe9ja3e9ghhliudsh88iyy6do83wbf15s9unroza922o15xydn6cq5kt4dqglb6m9du3ymjy58571zyajbkk200f0784g4h76raulti23rumxgbz29ctgi15ejgryw9paeqqqhypj580oed3j2t4r8zzkznr7w9sigtongvsd4p8hsu2aqcft7nov1sotp2m5uc1ix7et95x6eekncexsscuuee4hw95u9ex8n33811g4gg4jqdx8d08prsonqqntb2x249907fyz0rlcji7u2bs9e2srrnxmiqspnxysoou8ehouzlchpu4e9dy5izdju6abysortrokl0dlmxcodfqqyfcuq34133zu3ol7dbxuhtf831fizgisj5lxmimhe2acu4f0s14ept347qv32zdu30o47e0n0wp2q6l3tlypo3s7vj1liw5lg08sz82k12h9 == \4\i\c\h\2\x\l\p\x\a\n\c\q\e\p\5\d\6\b\h\g\r\r\u\1\y\0\r\a\r\2\2\x\a\1\2\u\x\v\m\f\8\h\i\k\q\a\3\u\q\i\1\1\5\f\j\d\g\u\x\e\9\j\a\3\e\9\g\h\h\l\i\u\d\s\h\8\8\i\y\y\6\d\o\8\3\w\b\f\1\5\s\9\u\n\r\o\z\a\9\2\2\o\1\5\x\y\d\n\6\c\q\5\k\t\4\d\q\g\l\b\6\m\9\d\u\3\y\m\j\y\5\8\5\7\1\z\y\a\j\b\k\k\2\0\0\f\0\7\8\4\g\4\h\7\6\r\a\u\l\t\i\2\3\r\u\m\x\g\b\z\2\9\c\t\g\i\1\5\e\j\g\r\y\w\9\p\a\e\q\q\q\h\y\p\j\5\8\0\o\e\d\3\j\2\t\4\r\8\z\z\k\z\n\r\7\w\9\s\i\g\t\o\n\g\v\s\d\4\p\8\h\s\u\2\a\q\c\f\t\7\n\o\v\1\s\o\t\p\2\m\5\u\c\1\i\x\7\e\t\9\5\x\6\e\e\k\n\c\e\x\s\s\c\u\u\e\e\4\h\w\9\5\u\9\e\x\8\n\3\3\8\1\1\g\4\g\g\4\j\q\d\x\8\d\0\8\p\r\s\o\n\q\q\n\t\b\2\x\2\4\9\9\0\7\f\y\z\0\r\l\c\j\i\7\u\2\b\s\9\e\2\s\r\r\n\x\m\i\q\s\p\n\x\y\s\o\o\u\8\e\h\o\u\z\l\c\h\p\u\4\e\9\d\y\5\i\z\d\j\u\6\a\b\y\s\o\r\t\r\o\k\l\0\d\l\m\x\c\o\d\f\q\q\y\f\c\u\q\3\4\1\3\3\z\u\3\o\l\7\d\b\x\u\h\t\f\8\3\1\f\i\z\g\i\s\j\5\l\x\m\i\m\h\e\2\a\c\u\4\f\0\s\1\4\e\p\t\3\4\7\q\v\3\2\z\d\u\3\0\o\4\7\e\0\n\0\w\p\2\q\6\l\3\t\l\y\p\o\3\s\7\v\j\1\l\i\w\5\l\g\0\8\s\z\8\2\k\1\2\h\9 ]] 01:05:00.147 11:02:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 01:05:00.147 11:02:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 01:05:00.147 11:02:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 01:05:00.147 11:02:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 01:05:00.147 11:02:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:05:00.147 11:02:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 01:05:00.147 [2024-07-22 11:02:05.180484] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:00.147 [2024-07-22 11:02:05.180547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75660 ] 01:05:00.147 [2024-07-22 11:02:05.321815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:00.406 [2024-07-22 11:02:05.366622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:00.406 [2024-07-22 11:02:05.408144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:00.406  Copying: 512/512 [B] (average 500 kBps) 01:05:00.406 01:05:00.406 11:02:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ adygowejk9dlwio316sopwls4erynuth4j5kaydlt3x1h4u12nlc111xtsf8hqrek0mtfxsvkt7apwvqz6xwe28qr08pcmcqxnyc280aafwzmfhho2e0jxsb7b0ym6deg80fgy6x6pbvkulmk98jxy6xfce7i1n05e77ge0upjq0l2h66t14chv31d6hyln8w19f9qpwu153i4rgeqke20v6ltg78w1kedpir9puutp9c79h09fntrmthhxqzfrv8uib0h5tw52d9f16ximf3v76czf0fcrogjrsnygjj61dcyphja2ckujc64803hrf862gxf16ies24ae959klwmfytpjafrnvhfzizakpq0gbgyeffrebtx4yj27sbggb25oss1aaijvhch8rzk2naoz7z9non8l90odw4kt374rrfirt4y9xozgb8hpla1b180i6gcatbbc3r4q0aut1pztd3jxramzzz5pidt8j1i68800r7wrq1vm0myqsryh6 == \a\d\y\g\o\w\e\j\k\9\d\l\w\i\o\3\1\6\s\o\p\w\l\s\4\e\r\y\n\u\t\h\4\j\5\k\a\y\d\l\t\3\x\1\h\4\u\1\2\n\l\c\1\1\1\x\t\s\f\8\h\q\r\e\k\0\m\t\f\x\s\v\k\t\7\a\p\w\v\q\z\6\x\w\e\2\8\q\r\0\8\p\c\m\c\q\x\n\y\c\2\8\0\a\a\f\w\z\m\f\h\h\o\2\e\0\j\x\s\b\7\b\0\y\m\6\d\e\g\8\0\f\g\y\6\x\6\p\b\v\k\u\l\m\k\9\8\j\x\y\6\x\f\c\e\7\i\1\n\0\5\e\7\7\g\e\0\u\p\j\q\0\l\2\h\6\6\t\1\4\c\h\v\3\1\d\6\h\y\l\n\8\w\1\9\f\9\q\p\w\u\1\5\3\i\4\r\g\e\q\k\e\2\0\v\6\l\t\g\7\8\w\1\k\e\d\p\i\r\9\p\u\u\t\p\9\c\7\9\h\0\9\f\n\t\r\m\t\h\h\x\q\z\f\r\v\8\u\i\b\0\h\5\t\w\5\2\d\9\f\1\6\x\i\m\f\3\v\7\6\c\z\f\0\f\c\r\o\g\j\r\s\n\y\g\j\j\6\1\d\c\y\p\h\j\a\2\c\k\u\j\c\6\4\8\0\3\h\r\f\8\6\2\g\x\f\1\6\i\e\s\2\4\a\e\9\5\9\k\l\w\m\f\y\t\p\j\a\f\r\n\v\h\f\z\i\z\a\k\p\q\0\g\b\g\y\e\f\f\r\e\b\t\x\4\y\j\2\7\s\b\g\g\b\2\5\o\s\s\1\a\a\i\j\v\h\c\h\8\r\z\k\2\n\a\o\z\7\z\9\n\o\n\8\l\9\0\o\d\w\4\k\t\3\7\4\r\r\f\i\r\t\4\y\9\x\o\z\g\b\8\h\p\l\a\1\b\1\8\0\i\6\g\c\a\t\b\b\c\3\r\4\q\0\a\u\t\1\p\z\t\d\3\j\x\r\a\m\z\z\z\5\p\i\d\t\8\j\1\i\6\8\8\0\0\r\7\w\r\q\1\v\m\0\m\y\q\s\r\y\h\6 ]] 01:05:00.406 11:02:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:05:00.406 11:02:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 01:05:00.668 [2024-07-22 11:02:05.653674] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:00.668 [2024-07-22 11:02:05.653734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75674 ] 01:05:00.668 [2024-07-22 11:02:05.794487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:00.668 [2024-07-22 11:02:05.834497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:00.927 [2024-07-22 11:02:05.875575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:00.927  Copying: 512/512 [B] (average 500 kBps) 01:05:00.927 01:05:00.927 11:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ adygowejk9dlwio316sopwls4erynuth4j5kaydlt3x1h4u12nlc111xtsf8hqrek0mtfxsvkt7apwvqz6xwe28qr08pcmcqxnyc280aafwzmfhho2e0jxsb7b0ym6deg80fgy6x6pbvkulmk98jxy6xfce7i1n05e77ge0upjq0l2h66t14chv31d6hyln8w19f9qpwu153i4rgeqke20v6ltg78w1kedpir9puutp9c79h09fntrmthhxqzfrv8uib0h5tw52d9f16ximf3v76czf0fcrogjrsnygjj61dcyphja2ckujc64803hrf862gxf16ies24ae959klwmfytpjafrnvhfzizakpq0gbgyeffrebtx4yj27sbggb25oss1aaijvhch8rzk2naoz7z9non8l90odw4kt374rrfirt4y9xozgb8hpla1b180i6gcatbbc3r4q0aut1pztd3jxramzzz5pidt8j1i68800r7wrq1vm0myqsryh6 == \a\d\y\g\o\w\e\j\k\9\d\l\w\i\o\3\1\6\s\o\p\w\l\s\4\e\r\y\n\u\t\h\4\j\5\k\a\y\d\l\t\3\x\1\h\4\u\1\2\n\l\c\1\1\1\x\t\s\f\8\h\q\r\e\k\0\m\t\f\x\s\v\k\t\7\a\p\w\v\q\z\6\x\w\e\2\8\q\r\0\8\p\c\m\c\q\x\n\y\c\2\8\0\a\a\f\w\z\m\f\h\h\o\2\e\0\j\x\s\b\7\b\0\y\m\6\d\e\g\8\0\f\g\y\6\x\6\p\b\v\k\u\l\m\k\9\8\j\x\y\6\x\f\c\e\7\i\1\n\0\5\e\7\7\g\e\0\u\p\j\q\0\l\2\h\6\6\t\1\4\c\h\v\3\1\d\6\h\y\l\n\8\w\1\9\f\9\q\p\w\u\1\5\3\i\4\r\g\e\q\k\e\2\0\v\6\l\t\g\7\8\w\1\k\e\d\p\i\r\9\p\u\u\t\p\9\c\7\9\h\0\9\f\n\t\r\m\t\h\h\x\q\z\f\r\v\8\u\i\b\0\h\5\t\w\5\2\d\9\f\1\6\x\i\m\f\3\v\7\6\c\z\f\0\f\c\r\o\g\j\r\s\n\y\g\j\j\6\1\d\c\y\p\h\j\a\2\c\k\u\j\c\6\4\8\0\3\h\r\f\8\6\2\g\x\f\1\6\i\e\s\2\4\a\e\9\5\9\k\l\w\m\f\y\t\p\j\a\f\r\n\v\h\f\z\i\z\a\k\p\q\0\g\b\g\y\e\f\f\r\e\b\t\x\4\y\j\2\7\s\b\g\g\b\2\5\o\s\s\1\a\a\i\j\v\h\c\h\8\r\z\k\2\n\a\o\z\7\z\9\n\o\n\8\l\9\0\o\d\w\4\k\t\3\7\4\r\r\f\i\r\t\4\y\9\x\o\z\g\b\8\h\p\l\a\1\b\1\8\0\i\6\g\c\a\t\b\b\c\3\r\4\q\0\a\u\t\1\p\z\t\d\3\j\x\r\a\m\z\z\z\5\p\i\d\t\8\j\1\i\6\8\8\0\0\r\7\w\r\q\1\v\m\0\m\y\q\s\r\y\h\6 ]] 01:05:00.927 11:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:05:00.927 11:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 01:05:00.927 [2024-07-22 11:02:06.110020] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:00.927 [2024-07-22 11:02:06.110084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75679 ] 01:05:01.186 [2024-07-22 11:02:06.252002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:01.186 [2024-07-22 11:02:06.293073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:01.186 [2024-07-22 11:02:06.333788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:01.446  Copying: 512/512 [B] (average 250 kBps) 01:05:01.446 01:05:01.446 11:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ adygowejk9dlwio316sopwls4erynuth4j5kaydlt3x1h4u12nlc111xtsf8hqrek0mtfxsvkt7apwvqz6xwe28qr08pcmcqxnyc280aafwzmfhho2e0jxsb7b0ym6deg80fgy6x6pbvkulmk98jxy6xfce7i1n05e77ge0upjq0l2h66t14chv31d6hyln8w19f9qpwu153i4rgeqke20v6ltg78w1kedpir9puutp9c79h09fntrmthhxqzfrv8uib0h5tw52d9f16ximf3v76czf0fcrogjrsnygjj61dcyphja2ckujc64803hrf862gxf16ies24ae959klwmfytpjafrnvhfzizakpq0gbgyeffrebtx4yj27sbggb25oss1aaijvhch8rzk2naoz7z9non8l90odw4kt374rrfirt4y9xozgb8hpla1b180i6gcatbbc3r4q0aut1pztd3jxramzzz5pidt8j1i68800r7wrq1vm0myqsryh6 == \a\d\y\g\o\w\e\j\k\9\d\l\w\i\o\3\1\6\s\o\p\w\l\s\4\e\r\y\n\u\t\h\4\j\5\k\a\y\d\l\t\3\x\1\h\4\u\1\2\n\l\c\1\1\1\x\t\s\f\8\h\q\r\e\k\0\m\t\f\x\s\v\k\t\7\a\p\w\v\q\z\6\x\w\e\2\8\q\r\0\8\p\c\m\c\q\x\n\y\c\2\8\0\a\a\f\w\z\m\f\h\h\o\2\e\0\j\x\s\b\7\b\0\y\m\6\d\e\g\8\0\f\g\y\6\x\6\p\b\v\k\u\l\m\k\9\8\j\x\y\6\x\f\c\e\7\i\1\n\0\5\e\7\7\g\e\0\u\p\j\q\0\l\2\h\6\6\t\1\4\c\h\v\3\1\d\6\h\y\l\n\8\w\1\9\f\9\q\p\w\u\1\5\3\i\4\r\g\e\q\k\e\2\0\v\6\l\t\g\7\8\w\1\k\e\d\p\i\r\9\p\u\u\t\p\9\c\7\9\h\0\9\f\n\t\r\m\t\h\h\x\q\z\f\r\v\8\u\i\b\0\h\5\t\w\5\2\d\9\f\1\6\x\i\m\f\3\v\7\6\c\z\f\0\f\c\r\o\g\j\r\s\n\y\g\j\j\6\1\d\c\y\p\h\j\a\2\c\k\u\j\c\6\4\8\0\3\h\r\f\8\6\2\g\x\f\1\6\i\e\s\2\4\a\e\9\5\9\k\l\w\m\f\y\t\p\j\a\f\r\n\v\h\f\z\i\z\a\k\p\q\0\g\b\g\y\e\f\f\r\e\b\t\x\4\y\j\2\7\s\b\g\g\b\2\5\o\s\s\1\a\a\i\j\v\h\c\h\8\r\z\k\2\n\a\o\z\7\z\9\n\o\n\8\l\9\0\o\d\w\4\k\t\3\7\4\r\r\f\i\r\t\4\y\9\x\o\z\g\b\8\h\p\l\a\1\b\1\8\0\i\6\g\c\a\t\b\b\c\3\r\4\q\0\a\u\t\1\p\z\t\d\3\j\x\r\a\m\z\z\z\5\p\i\d\t\8\j\1\i\6\8\8\0\0\r\7\w\r\q\1\v\m\0\m\y\q\s\r\y\h\6 ]] 01:05:01.447 11:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:05:01.447 11:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 01:05:01.447 [2024-07-22 11:02:06.568720] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:01.447 [2024-07-22 11:02:06.568783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75689 ] 01:05:01.725 [2024-07-22 11:02:06.709749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:01.725 [2024-07-22 11:02:06.750499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:01.725 [2024-07-22 11:02:06.791166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:02.001  Copying: 512/512 [B] (average 250 kBps) 01:05:02.002 01:05:02.002 11:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ adygowejk9dlwio316sopwls4erynuth4j5kaydlt3x1h4u12nlc111xtsf8hqrek0mtfxsvkt7apwvqz6xwe28qr08pcmcqxnyc280aafwzmfhho2e0jxsb7b0ym6deg80fgy6x6pbvkulmk98jxy6xfce7i1n05e77ge0upjq0l2h66t14chv31d6hyln8w19f9qpwu153i4rgeqke20v6ltg78w1kedpir9puutp9c79h09fntrmthhxqzfrv8uib0h5tw52d9f16ximf3v76czf0fcrogjrsnygjj61dcyphja2ckujc64803hrf862gxf16ies24ae959klwmfytpjafrnvhfzizakpq0gbgyeffrebtx4yj27sbggb25oss1aaijvhch8rzk2naoz7z9non8l90odw4kt374rrfirt4y9xozgb8hpla1b180i6gcatbbc3r4q0aut1pztd3jxramzzz5pidt8j1i68800r7wrq1vm0myqsryh6 == \a\d\y\g\o\w\e\j\k\9\d\l\w\i\o\3\1\6\s\o\p\w\l\s\4\e\r\y\n\u\t\h\4\j\5\k\a\y\d\l\t\3\x\1\h\4\u\1\2\n\l\c\1\1\1\x\t\s\f\8\h\q\r\e\k\0\m\t\f\x\s\v\k\t\7\a\p\w\v\q\z\6\x\w\e\2\8\q\r\0\8\p\c\m\c\q\x\n\y\c\2\8\0\a\a\f\w\z\m\f\h\h\o\2\e\0\j\x\s\b\7\b\0\y\m\6\d\e\g\8\0\f\g\y\6\x\6\p\b\v\k\u\l\m\k\9\8\j\x\y\6\x\f\c\e\7\i\1\n\0\5\e\7\7\g\e\0\u\p\j\q\0\l\2\h\6\6\t\1\4\c\h\v\3\1\d\6\h\y\l\n\8\w\1\9\f\9\q\p\w\u\1\5\3\i\4\r\g\e\q\k\e\2\0\v\6\l\t\g\7\8\w\1\k\e\d\p\i\r\9\p\u\u\t\p\9\c\7\9\h\0\9\f\n\t\r\m\t\h\h\x\q\z\f\r\v\8\u\i\b\0\h\5\t\w\5\2\d\9\f\1\6\x\i\m\f\3\v\7\6\c\z\f\0\f\c\r\o\g\j\r\s\n\y\g\j\j\6\1\d\c\y\p\h\j\a\2\c\k\u\j\c\6\4\8\0\3\h\r\f\8\6\2\g\x\f\1\6\i\e\s\2\4\a\e\9\5\9\k\l\w\m\f\y\t\p\j\a\f\r\n\v\h\f\z\i\z\a\k\p\q\0\g\b\g\y\e\f\f\r\e\b\t\x\4\y\j\2\7\s\b\g\g\b\2\5\o\s\s\1\a\a\i\j\v\h\c\h\8\r\z\k\2\n\a\o\z\7\z\9\n\o\n\8\l\9\0\o\d\w\4\k\t\3\7\4\r\r\f\i\r\t\4\y\9\x\o\z\g\b\8\h\p\l\a\1\b\1\8\0\i\6\g\c\a\t\b\b\c\3\r\4\q\0\a\u\t\1\p\z\t\d\3\j\x\r\a\m\z\z\z\5\p\i\d\t\8\j\1\i\6\8\8\0\0\r\7\w\r\q\1\v\m\0\m\y\q\s\r\y\h\6 ]] 01:05:02.002 01:05:02.002 real 0m3.738s 01:05:02.002 user 0m1.876s 01:05:02.002 sys 0m1.816s 01:05:02.002 11:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:02.002 11:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 01:05:02.002 ************************************ 01:05:02.002 END TEST dd_flags_misc 01:05:02.002 ************************************ 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 01:05:02.002 * Second test run, disabling liburing, forcing AIO 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
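The long dd_flags_misc block that precedes the "Second test run" banner is a small matrix test: for every read flag in (direct, nonblock) and every write flag in (direct, nonblock, sync, dsync) it copies 512 fresh random bytes from dump0 to dump1 and asserts the two files end up identical, which is why the same giant pattern match repeats eight times above. The loop is roughly as follows; the flag arrays mirror the flags_ro/flags_rw assignments in the trace, while cmp and the /dev/urandom stand-in for gen_bytes 512 are assumptions.

  for flag_ro in direct nonblock; do
    for flag_rw in direct nonblock sync dsync; do
      head -c 512 /dev/urandom > dd.dump0      # stand-in for gen_bytes 512
      "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
      cmp dd.dump0 dd.dump1                    # every combination must copy the bytes intact
    done
  done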
xtrace_disable 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:05:02.002 ************************************ 01:05:02.002 START TEST dd_flag_append_forced_aio 01:05:02.002 ************************************ 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=z5yk3ubni8gnepl88parn5j4b20amb4y 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=y6uawjthmsqf8trdmy5o6updwxriu0oz 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s z5yk3ubni8gnepl88parn5j4b20amb4y 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s y6uawjthmsqf8trdmy5o6updwxriu0oz 01:05:02.002 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 01:05:02.002 [2024-07-22 11:02:07.110288] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:02.002 [2024-07-22 11:02:07.110350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75717 ] 01:05:02.263 [2024-07-22 11:02:07.251938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:02.263 [2024-07-22 11:02:07.296212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:02.263 [2024-07-22 11:02:07.339206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:02.523  Copying: 32/32 [B] (average 31 kBps) 01:05:02.523 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ y6uawjthmsqf8trdmy5o6updwxriu0ozz5yk3ubni8gnepl88parn5j4b20amb4y == \y\6\u\a\w\j\t\h\m\s\q\f\8\t\r\d\m\y\5\o\6\u\p\d\w\x\r\i\u\0\o\z\z\5\y\k\3\u\b\n\i\8\g\n\e\p\l\8\8\p\a\r\n\5\j\4\b\2\0\a\m\b\4\y ]] 01:05:02.523 01:05:02.523 real 0m0.488s 01:05:02.523 user 0m0.235s 01:05:02.523 sys 0m0.133s 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:02.523 ************************************ 01:05:02.523 END TEST dd_flag_append_forced_aio 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:05:02.523 ************************************ 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:05:02.523 ************************************ 01:05:02.523 START TEST dd_flag_directory_forced_aio 01:05:02.523 ************************************ 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:02.523 11:02:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:05:02.523 [2024-07-22 11:02:07.666882] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:02.523 [2024-07-22 11:02:07.666947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75744 ] 01:05:02.782 [2024-07-22 11:02:07.808948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:02.782 [2024-07-22 11:02:07.853156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:02.782 [2024-07-22 11:02:07.893982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:02.782 [2024-07-22 11:02:07.915838] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:05:02.782 [2024-07-22 11:02:07.915897] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:05:02.782 [2024-07-22 11:02:07.915909] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:05:03.042 [2024-07-22 11:02:08.005534] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:03.042 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:05:03.042 [2024-07-22 11:02:08.138021] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:03.042 [2024-07-22 11:02:08.138087] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75753 ] 01:05:03.302 [2024-07-22 11:02:08.279717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:03.302 [2024-07-22 11:02:08.322795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:03.302 [2024-07-22 11:02:08.364063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:03.302 [2024-07-22 11:02:08.384629] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:05:03.302 [2024-07-22 11:02:08.384675] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:05:03.302 [2024-07-22 11:02:08.384688] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:05:03.302 [2024-07-22 11:02:08.475020] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 01:05:03.561 
11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:03.561 01:05:03.561 real 0m0.943s 01:05:03.561 user 0m0.457s 01:05:03.561 sys 0m0.277s 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:05:03.561 ************************************ 01:05:03.561 END TEST dd_flag_directory_forced_aio 01:05:03.561 ************************************ 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:05:03.561 ************************************ 01:05:03.561 START TEST dd_flag_nofollow_forced_aio 01:05:03.561 ************************************ 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:05:03.561 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 01:05:03.562 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:05:03.562 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:03.562 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:03.562 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:03.562 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:03.562 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:03.562 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:03.562 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:03.562 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:03.562 11:02:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:05:03.562 [2024-07-22 11:02:08.694051] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:03.562 [2024-07-22 11:02:08.694114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75782 ] 01:05:03.821 [2024-07-22 11:02:08.835409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:03.821 [2024-07-22 11:02:08.878199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:03.821 [2024-07-22 11:02:08.919093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:03.821 [2024-07-22 11:02:08.939409] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 01:05:03.821 [2024-07-22 11:02:08.939453] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 01:05:03.821 [2024-07-22 11:02:08.939466] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:05:04.088 [2024-07-22 11:02:09.027106] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
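For orientation amid the xtrace output: the nofollow checks running here reduce to the pattern sketched below. This is a minimal sketch, assuming spdk_dd has been built at build/bin/spdk_dd and that the dd.dump0/dd.dump1 scratch files from the preceding tests exist; the "Too many levels of symbolic links" text is the ELOOP error open(2) reports when O_NOFOLLOW meets a symlink. It is not the literal posix.sh code, which wraps the binary in the NOT/run_test helpers seen in the trace.

# Sketch of the nofollow checks exercised by dd_flag_nofollow_forced_aio.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cd /home/vagrant/spdk_repo/spdk/test/dd
ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link
# Reading through a symlink with --iflag=nofollow is expected to fail (ELOOP):
! "$DD" --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
# Writing through a symlink with --oflag=nofollow is expected to fail the same way:
! "$DD" --aio --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow
# Without the flag, copying through the link is expected to succeed:
"$DD" --aio --if=dd.dump0.link --of=dd.dump1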
01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:04.088 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:05:04.088 [2024-07-22 11:02:09.163187] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:04.088 [2024-07-22 11:02:09.163253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75791 ] 01:05:04.348 [2024-07-22 11:02:09.305312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:04.348 [2024-07-22 11:02:09.347206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:04.348 [2024-07-22 11:02:09.387712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:04.348 [2024-07-22 11:02:09.407928] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 01:05:04.348 [2024-07-22 11:02:09.407971] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 01:05:04.348 [2024-07-22 11:02:09.407984] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:05:04.348 [2024-07-22 11:02:09.495347] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:05:04.608 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 01:05:04.608 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:04.608 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 01:05:04.608 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 01:05:04.608 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 01:05:04.608 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:04.608 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 01:05:04.608 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:05:04.608 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:05:04.608 11:02:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:05:04.608 [2024-07-22 11:02:09.622036] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:04.608 [2024-07-22 11:02:09.622109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75799 ] 01:05:04.608 [2024-07-22 11:02:09.762416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:04.608 [2024-07-22 11:02:09.807303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:04.869 [2024-07-22 11:02:09.848572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:04.869  Copying: 512/512 [B] (average 500 kBps) 01:05:04.869 01:05:04.869 11:02:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ hodkmnarcp0q0l33lyjjnez988ib2pdgkn8j9b3svqui2rn4ugz3nlk5pfjmhmwt3u95w1w3d1pmqyzr52jhi09fpo8y42h2mqqqn9kjahobqa5hyy6ijbuhp7kfav1ywmrxcyi9xoowjbn2b3enngfn4l4amwqxizutoccrnbcc7wezr1adyy3aq73fbtngz6bdgbrt0wqpkbsrys2nm6wt3e9ejquebtpuqh3mzafj08f7eumwiqxy9dcr4j7o6ymvqskg52g9f1dhzgnvug3983m6zwzgo2eg4yzci979oyouqz0s8yglovik9nioifee9kf9qlgzk44nobhizc1hh8gu6kdjf39js4t0vzh0vvigzgistu7wg5yoyn3de8zixxu90oskwxiz01mnk29f342nu6recgi2hrd35r0nezoekcwgrhceo3ebsvgot5dcabzw0t5al23q5eumytaztg7i5dwx8g2bbxacayv6t5sumeeq6f0pnzlqm2gg == \h\o\d\k\m\n\a\r\c\p\0\q\0\l\3\3\l\y\j\j\n\e\z\9\8\8\i\b\2\p\d\g\k\n\8\j\9\b\3\s\v\q\u\i\2\r\n\4\u\g\z\3\n\l\k\5\p\f\j\m\h\m\w\t\3\u\9\5\w\1\w\3\d\1\p\m\q\y\z\r\5\2\j\h\i\0\9\f\p\o\8\y\4\2\h\2\m\q\q\q\n\9\k\j\a\h\o\b\q\a\5\h\y\y\6\i\j\b\u\h\p\7\k\f\a\v\1\y\w\m\r\x\c\y\i\9\x\o\o\w\j\b\n\2\b\3\e\n\n\g\f\n\4\l\4\a\m\w\q\x\i\z\u\t\o\c\c\r\n\b\c\c\7\w\e\z\r\1\a\d\y\y\3\a\q\7\3\f\b\t\n\g\z\6\b\d\g\b\r\t\0\w\q\p\k\b\s\r\y\s\2\n\m\6\w\t\3\e\9\e\j\q\u\e\b\t\p\u\q\h\3\m\z\a\f\j\0\8\f\7\e\u\m\w\i\q\x\y\9\d\c\r\4\j\7\o\6\y\m\v\q\s\k\g\5\2\g\9\f\1\d\h\z\g\n\v\u\g\3\9\8\3\m\6\z\w\z\g\o\2\e\g\4\y\z\c\i\9\7\9\o\y\o\u\q\z\0\s\8\y\g\l\o\v\i\k\9\n\i\o\i\f\e\e\9\k\f\9\q\l\g\z\k\4\4\n\o\b\h\i\z\c\1\h\h\8\g\u\6\k\d\j\f\3\9\j\s\4\t\0\v\z\h\0\v\v\i\g\z\g\i\s\t\u\7\w\g\5\y\o\y\n\3\d\e\8\z\i\x\x\u\9\0\o\s\k\w\x\i\z\0\1\m\n\k\2\9\f\3\4\2\n\u\6\r\e\c\g\i\2\h\r\d\3\5\r\0\n\e\z\o\e\k\c\w\g\r\h\c\e\o\3\e\b\s\v\g\o\t\5\d\c\a\b\z\w\0\t\5\a\l\2\3\q\5\e\u\m\y\t\a\z\t\g\7\i\5\d\w\x\8\g\2\b\b\x\a\c\a\y\v\6\t\5\s\u\m\e\e\q\6\f\0\p\n\z\l\q\m\2\g\g ]] 01:05:04.869 01:05:04.869 real 0m1.431s 01:05:04.869 user 0m0.699s 01:05:04.869 sys 0m0.400s 01:05:04.869 11:02:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:04.869 ************************************ 01:05:04.869 END TEST dd_flag_nofollow_forced_aio 01:05:04.869 11:02:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 
01:05:04.869 ************************************ 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:05:05.128 ************************************ 01:05:05.128 START TEST dd_flag_noatime_forced_aio 01:05:05.128 ************************************ 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721646129 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721646130 01:05:05.128 11:02:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 01:05:06.065 11:02:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:05:06.065 [2024-07-22 11:02:11.219949] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
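The dd_flag_noatime_forced_aio run launched here boils down to an access-time comparison. A minimal sketch, assuming the same dd.dump0/dd.dump1 scratch files and a filesystem that updates atime on reads; the 1721646129/1721646130 values in the trace are simply the atimes recorded during this particular run.

# Sketch of the noatime check.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cd /home/vagrant/spdk_repo/spdk/test/dd
atime_if=$(stat --printf=%X dd.dump0)
atime_of=$(stat --printf=%X dd.dump1)
sleep 1
# Copy with --iflag=noatime: neither file's atime should move.
"$DD" --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_if ))
(( $(stat --printf=%X dd.dump1) == atime_of ))
# Copy again without the flag: the source atime is now expected to advance.
"$DD" --aio --if=dd.dump0 --of=dd.dump1
(( atime_if < $(stat --printf=%X dd.dump0) ))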
01:05:06.065 [2024-07-22 11:02:11.220018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75839 ] 01:05:06.324 [2024-07-22 11:02:11.362274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:06.324 [2024-07-22 11:02:11.405641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:06.324 [2024-07-22 11:02:11.447245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:06.581  Copying: 512/512 [B] (average 500 kBps) 01:05:06.581 01:05:06.581 11:02:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:05:06.581 11:02:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721646129 )) 01:05:06.581 11:02:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:05:06.581 11:02:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721646130 )) 01:05:06.581 11:02:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:05:06.581 [2024-07-22 11:02:11.702752] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:06.581 [2024-07-22 11:02:11.702819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75851 ] 01:05:06.839 [2024-07-22 11:02:11.842168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:06.839 [2024-07-22 11:02:11.895818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:06.839 [2024-07-22 11:02:11.936467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:07.097  Copying: 512/512 [B] (average 500 kBps) 01:05:07.097 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721646131 )) 01:05:07.097 01:05:07.097 real 0m2.014s 01:05:07.097 user 0m0.483s 01:05:07.097 sys 0m0.291s 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:05:07.097 ************************************ 01:05:07.097 END TEST dd_flag_noatime_forced_aio 01:05:07.097 ************************************ 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:07.097 11:02:12 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:05:07.097 ************************************ 01:05:07.097 START TEST dd_flags_misc_forced_aio 01:05:07.097 ************************************ 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:05:07.097 11:02:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 01:05:07.098 [2024-07-22 11:02:12.289926] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:07.098 [2024-07-22 11:02:12.289992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75877 ] 01:05:07.357 [2024-07-22 11:02:12.431609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:07.357 [2024-07-22 11:02:12.477091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:07.357 [2024-07-22 11:02:12.517842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:07.616  Copying: 512/512 [B] (average 500 kBps) 01:05:07.616 01:05:07.616 11:02:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v73082unlhufmj9jpndbsj7092eg7shz5nfkea0k0ntfso2uny1dknojuhkgs1fvj4uhna6l31veimvdw0je5jf2cz5b1lp3d74xs1o93v0v5j1vb0ij3x0aydhpfhdgrq3hqogv988z3yz1g3yuzff9xi6drmuevx5y4o2hjm4ecz651c1nx1kz7wzminwzn9rx6lmep594et4o31cv8vtdqqgxco83q5l6d1lkri4c3t5n8k6x56b2oilh3ovt6fn03q5qq8pp6eq7c1ln6giqqju0ozsmidza7fpfafbx5rq0bijjqil6aeb2jbdkly2h0lc5ldt4vokrvr4tak9to3klquegy0d4bpt4sak9wlzybqvgwlmp3pphopmla1yaxu9hqbm6pvbn5om71gcs9j1ss52c7zfqtml44nml04akwry3ye8hhxrwk248hrre3svwpnxe88uwjqq3suz149g868xfbnughu3s56t50l0vdeep8kyppjgkngpo == 
\v\7\3\0\8\2\u\n\l\h\u\f\m\j\9\j\p\n\d\b\s\j\7\0\9\2\e\g\7\s\h\z\5\n\f\k\e\a\0\k\0\n\t\f\s\o\2\u\n\y\1\d\k\n\o\j\u\h\k\g\s\1\f\v\j\4\u\h\n\a\6\l\3\1\v\e\i\m\v\d\w\0\j\e\5\j\f\2\c\z\5\b\1\l\p\3\d\7\4\x\s\1\o\9\3\v\0\v\5\j\1\v\b\0\i\j\3\x\0\a\y\d\h\p\f\h\d\g\r\q\3\h\q\o\g\v\9\8\8\z\3\y\z\1\g\3\y\u\z\f\f\9\x\i\6\d\r\m\u\e\v\x\5\y\4\o\2\h\j\m\4\e\c\z\6\5\1\c\1\n\x\1\k\z\7\w\z\m\i\n\w\z\n\9\r\x\6\l\m\e\p\5\9\4\e\t\4\o\3\1\c\v\8\v\t\d\q\q\g\x\c\o\8\3\q\5\l\6\d\1\l\k\r\i\4\c\3\t\5\n\8\k\6\x\5\6\b\2\o\i\l\h\3\o\v\t\6\f\n\0\3\q\5\q\q\8\p\p\6\e\q\7\c\1\l\n\6\g\i\q\q\j\u\0\o\z\s\m\i\d\z\a\7\f\p\f\a\f\b\x\5\r\q\0\b\i\j\j\q\i\l\6\a\e\b\2\j\b\d\k\l\y\2\h\0\l\c\5\l\d\t\4\v\o\k\r\v\r\4\t\a\k\9\t\o\3\k\l\q\u\e\g\y\0\d\4\b\p\t\4\s\a\k\9\w\l\z\y\b\q\v\g\w\l\m\p\3\p\p\h\o\p\m\l\a\1\y\a\x\u\9\h\q\b\m\6\p\v\b\n\5\o\m\7\1\g\c\s\9\j\1\s\s\5\2\c\7\z\f\q\t\m\l\4\4\n\m\l\0\4\a\k\w\r\y\3\y\e\8\h\h\x\r\w\k\2\4\8\h\r\r\e\3\s\v\w\p\n\x\e\8\8\u\w\j\q\q\3\s\u\z\1\4\9\g\8\6\8\x\f\b\n\u\g\h\u\3\s\5\6\t\5\0\l\0\v\d\e\e\p\8\k\y\p\p\j\g\k\n\g\p\o ]] 01:05:07.616 11:02:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:05:07.616 11:02:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 01:05:07.616 [2024-07-22 11:02:12.751274] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:07.616 [2024-07-22 11:02:12.751342] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75885 ] 01:05:07.876 [2024-07-22 11:02:12.893016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:07.876 [2024-07-22 11:02:12.936087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:07.876 [2024-07-22 11:02:12.977081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:08.134  Copying: 512/512 [B] (average 500 kBps) 01:05:08.134 01:05:08.134 11:02:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v73082unlhufmj9jpndbsj7092eg7shz5nfkea0k0ntfso2uny1dknojuhkgs1fvj4uhna6l31veimvdw0je5jf2cz5b1lp3d74xs1o93v0v5j1vb0ij3x0aydhpfhdgrq3hqogv988z3yz1g3yuzff9xi6drmuevx5y4o2hjm4ecz651c1nx1kz7wzminwzn9rx6lmep594et4o31cv8vtdqqgxco83q5l6d1lkri4c3t5n8k6x56b2oilh3ovt6fn03q5qq8pp6eq7c1ln6giqqju0ozsmidza7fpfafbx5rq0bijjqil6aeb2jbdkly2h0lc5ldt4vokrvr4tak9to3klquegy0d4bpt4sak9wlzybqvgwlmp3pphopmla1yaxu9hqbm6pvbn5om71gcs9j1ss52c7zfqtml44nml04akwry3ye8hhxrwk248hrre3svwpnxe88uwjqq3suz149g868xfbnughu3s56t50l0vdeep8kyppjgkngpo == 
\v\7\3\0\8\2\u\n\l\h\u\f\m\j\9\j\p\n\d\b\s\j\7\0\9\2\e\g\7\s\h\z\5\n\f\k\e\a\0\k\0\n\t\f\s\o\2\u\n\y\1\d\k\n\o\j\u\h\k\g\s\1\f\v\j\4\u\h\n\a\6\l\3\1\v\e\i\m\v\d\w\0\j\e\5\j\f\2\c\z\5\b\1\l\p\3\d\7\4\x\s\1\o\9\3\v\0\v\5\j\1\v\b\0\i\j\3\x\0\a\y\d\h\p\f\h\d\g\r\q\3\h\q\o\g\v\9\8\8\z\3\y\z\1\g\3\y\u\z\f\f\9\x\i\6\d\r\m\u\e\v\x\5\y\4\o\2\h\j\m\4\e\c\z\6\5\1\c\1\n\x\1\k\z\7\w\z\m\i\n\w\z\n\9\r\x\6\l\m\e\p\5\9\4\e\t\4\o\3\1\c\v\8\v\t\d\q\q\g\x\c\o\8\3\q\5\l\6\d\1\l\k\r\i\4\c\3\t\5\n\8\k\6\x\5\6\b\2\o\i\l\h\3\o\v\t\6\f\n\0\3\q\5\q\q\8\p\p\6\e\q\7\c\1\l\n\6\g\i\q\q\j\u\0\o\z\s\m\i\d\z\a\7\f\p\f\a\f\b\x\5\r\q\0\b\i\j\j\q\i\l\6\a\e\b\2\j\b\d\k\l\y\2\h\0\l\c\5\l\d\t\4\v\o\k\r\v\r\4\t\a\k\9\t\o\3\k\l\q\u\e\g\y\0\d\4\b\p\t\4\s\a\k\9\w\l\z\y\b\q\v\g\w\l\m\p\3\p\p\h\o\p\m\l\a\1\y\a\x\u\9\h\q\b\m\6\p\v\b\n\5\o\m\7\1\g\c\s\9\j\1\s\s\5\2\c\7\z\f\q\t\m\l\4\4\n\m\l\0\4\a\k\w\r\y\3\y\e\8\h\h\x\r\w\k\2\4\8\h\r\r\e\3\s\v\w\p\n\x\e\8\8\u\w\j\q\q\3\s\u\z\1\4\9\g\8\6\8\x\f\b\n\u\g\h\u\3\s\5\6\t\5\0\l\0\v\d\e\e\p\8\k\y\p\p\j\g\k\n\g\p\o ]] 01:05:08.134 11:02:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:05:08.134 11:02:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 01:05:08.134 [2024-07-22 11:02:13.236279] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:08.134 [2024-07-22 11:02:13.236345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75892 ] 01:05:08.391 [2024-07-22 11:02:13.377229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:08.391 [2024-07-22 11:02:13.419637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:08.391 [2024-07-22 11:02:13.460259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:08.648  Copying: 512/512 [B] (average 166 kBps) 01:05:08.648 01:05:08.648 11:02:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v73082unlhufmj9jpndbsj7092eg7shz5nfkea0k0ntfso2uny1dknojuhkgs1fvj4uhna6l31veimvdw0je5jf2cz5b1lp3d74xs1o93v0v5j1vb0ij3x0aydhpfhdgrq3hqogv988z3yz1g3yuzff9xi6drmuevx5y4o2hjm4ecz651c1nx1kz7wzminwzn9rx6lmep594et4o31cv8vtdqqgxco83q5l6d1lkri4c3t5n8k6x56b2oilh3ovt6fn03q5qq8pp6eq7c1ln6giqqju0ozsmidza7fpfafbx5rq0bijjqil6aeb2jbdkly2h0lc5ldt4vokrvr4tak9to3klquegy0d4bpt4sak9wlzybqvgwlmp3pphopmla1yaxu9hqbm6pvbn5om71gcs9j1ss52c7zfqtml44nml04akwry3ye8hhxrwk248hrre3svwpnxe88uwjqq3suz149g868xfbnughu3s56t50l0vdeep8kyppjgkngpo == 
\v\7\3\0\8\2\u\n\l\h\u\f\m\j\9\j\p\n\d\b\s\j\7\0\9\2\e\g\7\s\h\z\5\n\f\k\e\a\0\k\0\n\t\f\s\o\2\u\n\y\1\d\k\n\o\j\u\h\k\g\s\1\f\v\j\4\u\h\n\a\6\l\3\1\v\e\i\m\v\d\w\0\j\e\5\j\f\2\c\z\5\b\1\l\p\3\d\7\4\x\s\1\o\9\3\v\0\v\5\j\1\v\b\0\i\j\3\x\0\a\y\d\h\p\f\h\d\g\r\q\3\h\q\o\g\v\9\8\8\z\3\y\z\1\g\3\y\u\z\f\f\9\x\i\6\d\r\m\u\e\v\x\5\y\4\o\2\h\j\m\4\e\c\z\6\5\1\c\1\n\x\1\k\z\7\w\z\m\i\n\w\z\n\9\r\x\6\l\m\e\p\5\9\4\e\t\4\o\3\1\c\v\8\v\t\d\q\q\g\x\c\o\8\3\q\5\l\6\d\1\l\k\r\i\4\c\3\t\5\n\8\k\6\x\5\6\b\2\o\i\l\h\3\o\v\t\6\f\n\0\3\q\5\q\q\8\p\p\6\e\q\7\c\1\l\n\6\g\i\q\q\j\u\0\o\z\s\m\i\d\z\a\7\f\p\f\a\f\b\x\5\r\q\0\b\i\j\j\q\i\l\6\a\e\b\2\j\b\d\k\l\y\2\h\0\l\c\5\l\d\t\4\v\o\k\r\v\r\4\t\a\k\9\t\o\3\k\l\q\u\e\g\y\0\d\4\b\p\t\4\s\a\k\9\w\l\z\y\b\q\v\g\w\l\m\p\3\p\p\h\o\p\m\l\a\1\y\a\x\u\9\h\q\b\m\6\p\v\b\n\5\o\m\7\1\g\c\s\9\j\1\s\s\5\2\c\7\z\f\q\t\m\l\4\4\n\m\l\0\4\a\k\w\r\y\3\y\e\8\h\h\x\r\w\k\2\4\8\h\r\r\e\3\s\v\w\p\n\x\e\8\8\u\w\j\q\q\3\s\u\z\1\4\9\g\8\6\8\x\f\b\n\u\g\h\u\3\s\5\6\t\5\0\l\0\v\d\e\e\p\8\k\y\p\p\j\g\k\n\g\p\o ]] 01:05:08.649 11:02:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:05:08.649 11:02:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 01:05:08.649 [2024-07-22 11:02:13.701148] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:08.649 [2024-07-22 11:02:13.701242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75894 ] 01:05:08.649 [2024-07-22 11:02:13.843686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:08.907 [2024-07-22 11:02:13.887935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:08.907 [2024-07-22 11:02:13.929671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:09.165  Copying: 512/512 [B] (average 500 kBps) 01:05:09.165 01:05:09.165 11:02:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v73082unlhufmj9jpndbsj7092eg7shz5nfkea0k0ntfso2uny1dknojuhkgs1fvj4uhna6l31veimvdw0je5jf2cz5b1lp3d74xs1o93v0v5j1vb0ij3x0aydhpfhdgrq3hqogv988z3yz1g3yuzff9xi6drmuevx5y4o2hjm4ecz651c1nx1kz7wzminwzn9rx6lmep594et4o31cv8vtdqqgxco83q5l6d1lkri4c3t5n8k6x56b2oilh3ovt6fn03q5qq8pp6eq7c1ln6giqqju0ozsmidza7fpfafbx5rq0bijjqil6aeb2jbdkly2h0lc5ldt4vokrvr4tak9to3klquegy0d4bpt4sak9wlzybqvgwlmp3pphopmla1yaxu9hqbm6pvbn5om71gcs9j1ss52c7zfqtml44nml04akwry3ye8hhxrwk248hrre3svwpnxe88uwjqq3suz149g868xfbnughu3s56t50l0vdeep8kyppjgkngpo == 
\v\7\3\0\8\2\u\n\l\h\u\f\m\j\9\j\p\n\d\b\s\j\7\0\9\2\e\g\7\s\h\z\5\n\f\k\e\a\0\k\0\n\t\f\s\o\2\u\n\y\1\d\k\n\o\j\u\h\k\g\s\1\f\v\j\4\u\h\n\a\6\l\3\1\v\e\i\m\v\d\w\0\j\e\5\j\f\2\c\z\5\b\1\l\p\3\d\7\4\x\s\1\o\9\3\v\0\v\5\j\1\v\b\0\i\j\3\x\0\a\y\d\h\p\f\h\d\g\r\q\3\h\q\o\g\v\9\8\8\z\3\y\z\1\g\3\y\u\z\f\f\9\x\i\6\d\r\m\u\e\v\x\5\y\4\o\2\h\j\m\4\e\c\z\6\5\1\c\1\n\x\1\k\z\7\w\z\m\i\n\w\z\n\9\r\x\6\l\m\e\p\5\9\4\e\t\4\o\3\1\c\v\8\v\t\d\q\q\g\x\c\o\8\3\q\5\l\6\d\1\l\k\r\i\4\c\3\t\5\n\8\k\6\x\5\6\b\2\o\i\l\h\3\o\v\t\6\f\n\0\3\q\5\q\q\8\p\p\6\e\q\7\c\1\l\n\6\g\i\q\q\j\u\0\o\z\s\m\i\d\z\a\7\f\p\f\a\f\b\x\5\r\q\0\b\i\j\j\q\i\l\6\a\e\b\2\j\b\d\k\l\y\2\h\0\l\c\5\l\d\t\4\v\o\k\r\v\r\4\t\a\k\9\t\o\3\k\l\q\u\e\g\y\0\d\4\b\p\t\4\s\a\k\9\w\l\z\y\b\q\v\g\w\l\m\p\3\p\p\h\o\p\m\l\a\1\y\a\x\u\9\h\q\b\m\6\p\v\b\n\5\o\m\7\1\g\c\s\9\j\1\s\s\5\2\c\7\z\f\q\t\m\l\4\4\n\m\l\0\4\a\k\w\r\y\3\y\e\8\h\h\x\r\w\k\2\4\8\h\r\r\e\3\s\v\w\p\n\x\e\8\8\u\w\j\q\q\3\s\u\z\1\4\9\g\8\6\8\x\f\b\n\u\g\h\u\3\s\5\6\t\5\0\l\0\v\d\e\e\p\8\k\y\p\p\j\g\k\n\g\p\o ]] 01:05:09.165 11:02:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 01:05:09.165 11:02:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 01:05:09.165 11:02:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:05:09.165 11:02:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:05:09.165 11:02:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:05:09.165 11:02:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 01:05:09.165 [2024-07-22 11:02:14.185664] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
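The dd_flags_misc_forced_aio loop visible above and continuing below sweeps a small matrix of open(2) flags. A minimal sketch of that matrix follows, with cmp standing in for the base64 payload comparison the real test performs after regenerating a 512-byte random input (gen_bytes 512) for each read flag.

# Sketch of the read-flag x write-flag matrix exercised by this test.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cd /home/vagrant/spdk_repo/spdk/test/dd
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
        "$DD" --aio --if=dd.dump0 --iflag="$flag_ro" \
              --of=dd.dump1 --oflag="$flag_rw"
        cmp dd.dump0 dd.dump1    # every combination must produce an identical copy
    done
done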
01:05:09.165 [2024-07-22 11:02:14.185732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75906 ] 01:05:09.165 [2024-07-22 11:02:14.328165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:09.423 [2024-07-22 11:02:14.371424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:09.423 [2024-07-22 11:02:14.412049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:09.423  Copying: 512/512 [B] (average 500 kBps) 01:05:09.423 01:05:09.423 11:02:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0v4g2fk9zw3d629xa3cxjvuc0mkuvoi09pyx27izyxa42jv9egrbnnxevsoz2av2zg66f113luu8oyjqi3dguyenadhxab1h53sl9hvsedcec73q6nqo9raxwb4234o9czr9zxpwzx4mpmwmyxin8ns0yueywaf6mnc6q521vd9v3li487smhnyi1ytyjdtdtnxvkx219fkmjul32udrq7ofc7hpp10kk9881p7ppm4p08168a77mc8ja8r1acq6lyaqb7bzfyx442qomt69lx2w1t2h45xqjbultcdxwf6wnaynp7ibzekw5skjizd2heynndx4j9t0x82reor6wo9uq6jz3gn89xmbx86w2zvvrh3decq9m3uxp3f30x8yit27l5wiyg3lnm1piokj0p0g1g506y18jq8xq5g6eotlf3m25rfzn45dcs65f9rhgku5nhfyhqenzpdpw4z73jjggburcjvmz8gixbxnoxexzsi97kjx2o2zvw58kbm6 == \0\v\4\g\2\f\k\9\z\w\3\d\6\2\9\x\a\3\c\x\j\v\u\c\0\m\k\u\v\o\i\0\9\p\y\x\2\7\i\z\y\x\a\4\2\j\v\9\e\g\r\b\n\n\x\e\v\s\o\z\2\a\v\2\z\g\6\6\f\1\1\3\l\u\u\8\o\y\j\q\i\3\d\g\u\y\e\n\a\d\h\x\a\b\1\h\5\3\s\l\9\h\v\s\e\d\c\e\c\7\3\q\6\n\q\o\9\r\a\x\w\b\4\2\3\4\o\9\c\z\r\9\z\x\p\w\z\x\4\m\p\m\w\m\y\x\i\n\8\n\s\0\y\u\e\y\w\a\f\6\m\n\c\6\q\5\2\1\v\d\9\v\3\l\i\4\8\7\s\m\h\n\y\i\1\y\t\y\j\d\t\d\t\n\x\v\k\x\2\1\9\f\k\m\j\u\l\3\2\u\d\r\q\7\o\f\c\7\h\p\p\1\0\k\k\9\8\8\1\p\7\p\p\m\4\p\0\8\1\6\8\a\7\7\m\c\8\j\a\8\r\1\a\c\q\6\l\y\a\q\b\7\b\z\f\y\x\4\4\2\q\o\m\t\6\9\l\x\2\w\1\t\2\h\4\5\x\q\j\b\u\l\t\c\d\x\w\f\6\w\n\a\y\n\p\7\i\b\z\e\k\w\5\s\k\j\i\z\d\2\h\e\y\n\n\d\x\4\j\9\t\0\x\8\2\r\e\o\r\6\w\o\9\u\q\6\j\z\3\g\n\8\9\x\m\b\x\8\6\w\2\z\v\v\r\h\3\d\e\c\q\9\m\3\u\x\p\3\f\3\0\x\8\y\i\t\2\7\l\5\w\i\y\g\3\l\n\m\1\p\i\o\k\j\0\p\0\g\1\g\5\0\6\y\1\8\j\q\8\x\q\5\g\6\e\o\t\l\f\3\m\2\5\r\f\z\n\4\5\d\c\s\6\5\f\9\r\h\g\k\u\5\n\h\f\y\h\q\e\n\z\p\d\p\w\4\z\7\3\j\j\g\g\b\u\r\c\j\v\m\z\8\g\i\x\b\x\n\o\x\e\x\z\s\i\9\7\k\j\x\2\o\2\z\v\w\5\8\k\b\m\6 ]] 01:05:09.423 11:02:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:05:09.423 11:02:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 01:05:09.681 [2024-07-22 11:02:14.667394] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:09.681 [2024-07-22 11:02:14.667465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75909 ] 01:05:09.681 [2024-07-22 11:02:14.809422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:09.681 [2024-07-22 11:02:14.854675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:09.939 [2024-07-22 11:02:14.895207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:09.939  Copying: 512/512 [B] (average 500 kBps) 01:05:09.939 01:05:09.940 11:02:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0v4g2fk9zw3d629xa3cxjvuc0mkuvoi09pyx27izyxa42jv9egrbnnxevsoz2av2zg66f113luu8oyjqi3dguyenadhxab1h53sl9hvsedcec73q6nqo9raxwb4234o9czr9zxpwzx4mpmwmyxin8ns0yueywaf6mnc6q521vd9v3li487smhnyi1ytyjdtdtnxvkx219fkmjul32udrq7ofc7hpp10kk9881p7ppm4p08168a77mc8ja8r1acq6lyaqb7bzfyx442qomt69lx2w1t2h45xqjbultcdxwf6wnaynp7ibzekw5skjizd2heynndx4j9t0x82reor6wo9uq6jz3gn89xmbx86w2zvvrh3decq9m3uxp3f30x8yit27l5wiyg3lnm1piokj0p0g1g506y18jq8xq5g6eotlf3m25rfzn45dcs65f9rhgku5nhfyhqenzpdpw4z73jjggburcjvmz8gixbxnoxexzsi97kjx2o2zvw58kbm6 == \0\v\4\g\2\f\k\9\z\w\3\d\6\2\9\x\a\3\c\x\j\v\u\c\0\m\k\u\v\o\i\0\9\p\y\x\2\7\i\z\y\x\a\4\2\j\v\9\e\g\r\b\n\n\x\e\v\s\o\z\2\a\v\2\z\g\6\6\f\1\1\3\l\u\u\8\o\y\j\q\i\3\d\g\u\y\e\n\a\d\h\x\a\b\1\h\5\3\s\l\9\h\v\s\e\d\c\e\c\7\3\q\6\n\q\o\9\r\a\x\w\b\4\2\3\4\o\9\c\z\r\9\z\x\p\w\z\x\4\m\p\m\w\m\y\x\i\n\8\n\s\0\y\u\e\y\w\a\f\6\m\n\c\6\q\5\2\1\v\d\9\v\3\l\i\4\8\7\s\m\h\n\y\i\1\y\t\y\j\d\t\d\t\n\x\v\k\x\2\1\9\f\k\m\j\u\l\3\2\u\d\r\q\7\o\f\c\7\h\p\p\1\0\k\k\9\8\8\1\p\7\p\p\m\4\p\0\8\1\6\8\a\7\7\m\c\8\j\a\8\r\1\a\c\q\6\l\y\a\q\b\7\b\z\f\y\x\4\4\2\q\o\m\t\6\9\l\x\2\w\1\t\2\h\4\5\x\q\j\b\u\l\t\c\d\x\w\f\6\w\n\a\y\n\p\7\i\b\z\e\k\w\5\s\k\j\i\z\d\2\h\e\y\n\n\d\x\4\j\9\t\0\x\8\2\r\e\o\r\6\w\o\9\u\q\6\j\z\3\g\n\8\9\x\m\b\x\8\6\w\2\z\v\v\r\h\3\d\e\c\q\9\m\3\u\x\p\3\f\3\0\x\8\y\i\t\2\7\l\5\w\i\y\g\3\l\n\m\1\p\i\o\k\j\0\p\0\g\1\g\5\0\6\y\1\8\j\q\8\x\q\5\g\6\e\o\t\l\f\3\m\2\5\r\f\z\n\4\5\d\c\s\6\5\f\9\r\h\g\k\u\5\n\h\f\y\h\q\e\n\z\p\d\p\w\4\z\7\3\j\j\g\g\b\u\r\c\j\v\m\z\8\g\i\x\b\x\n\o\x\e\x\z\s\i\9\7\k\j\x\2\o\2\z\v\w\5\8\k\b\m\6 ]] 01:05:09.940 11:02:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:05:09.940 11:02:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 01:05:09.940 [2024-07-22 11:02:15.129434] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:09.940 [2024-07-22 11:02:15.129502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75917 ] 01:05:10.198 [2024-07-22 11:02:15.269201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:10.198 [2024-07-22 11:02:15.310556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:10.198 [2024-07-22 11:02:15.351320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:10.455  Copying: 512/512 [B] (average 250 kBps) 01:05:10.455 01:05:10.455 11:02:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0v4g2fk9zw3d629xa3cxjvuc0mkuvoi09pyx27izyxa42jv9egrbnnxevsoz2av2zg66f113luu8oyjqi3dguyenadhxab1h53sl9hvsedcec73q6nqo9raxwb4234o9czr9zxpwzx4mpmwmyxin8ns0yueywaf6mnc6q521vd9v3li487smhnyi1ytyjdtdtnxvkx219fkmjul32udrq7ofc7hpp10kk9881p7ppm4p08168a77mc8ja8r1acq6lyaqb7bzfyx442qomt69lx2w1t2h45xqjbultcdxwf6wnaynp7ibzekw5skjizd2heynndx4j9t0x82reor6wo9uq6jz3gn89xmbx86w2zvvrh3decq9m3uxp3f30x8yit27l5wiyg3lnm1piokj0p0g1g506y18jq8xq5g6eotlf3m25rfzn45dcs65f9rhgku5nhfyhqenzpdpw4z73jjggburcjvmz8gixbxnoxexzsi97kjx2o2zvw58kbm6 == \0\v\4\g\2\f\k\9\z\w\3\d\6\2\9\x\a\3\c\x\j\v\u\c\0\m\k\u\v\o\i\0\9\p\y\x\2\7\i\z\y\x\a\4\2\j\v\9\e\g\r\b\n\n\x\e\v\s\o\z\2\a\v\2\z\g\6\6\f\1\1\3\l\u\u\8\o\y\j\q\i\3\d\g\u\y\e\n\a\d\h\x\a\b\1\h\5\3\s\l\9\h\v\s\e\d\c\e\c\7\3\q\6\n\q\o\9\r\a\x\w\b\4\2\3\4\o\9\c\z\r\9\z\x\p\w\z\x\4\m\p\m\w\m\y\x\i\n\8\n\s\0\y\u\e\y\w\a\f\6\m\n\c\6\q\5\2\1\v\d\9\v\3\l\i\4\8\7\s\m\h\n\y\i\1\y\t\y\j\d\t\d\t\n\x\v\k\x\2\1\9\f\k\m\j\u\l\3\2\u\d\r\q\7\o\f\c\7\h\p\p\1\0\k\k\9\8\8\1\p\7\p\p\m\4\p\0\8\1\6\8\a\7\7\m\c\8\j\a\8\r\1\a\c\q\6\l\y\a\q\b\7\b\z\f\y\x\4\4\2\q\o\m\t\6\9\l\x\2\w\1\t\2\h\4\5\x\q\j\b\u\l\t\c\d\x\w\f\6\w\n\a\y\n\p\7\i\b\z\e\k\w\5\s\k\j\i\z\d\2\h\e\y\n\n\d\x\4\j\9\t\0\x\8\2\r\e\o\r\6\w\o\9\u\q\6\j\z\3\g\n\8\9\x\m\b\x\8\6\w\2\z\v\v\r\h\3\d\e\c\q\9\m\3\u\x\p\3\f\3\0\x\8\y\i\t\2\7\l\5\w\i\y\g\3\l\n\m\1\p\i\o\k\j\0\p\0\g\1\g\5\0\6\y\1\8\j\q\8\x\q\5\g\6\e\o\t\l\f\3\m\2\5\r\f\z\n\4\5\d\c\s\6\5\f\9\r\h\g\k\u\5\n\h\f\y\h\q\e\n\z\p\d\p\w\4\z\7\3\j\j\g\g\b\u\r\c\j\v\m\z\8\g\i\x\b\x\n\o\x\e\x\z\s\i\9\7\k\j\x\2\o\2\z\v\w\5\8\k\b\m\6 ]] 01:05:10.455 11:02:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:05:10.455 11:02:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 01:05:10.455 [2024-07-22 11:02:15.588352] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:10.455 [2024-07-22 11:02:15.588434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75924 ] 01:05:10.714 [2024-07-22 11:02:15.730302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:10.714 [2024-07-22 11:02:15.774298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:10.714 [2024-07-22 11:02:15.815745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:10.972  Copying: 512/512 [B] (average 500 kBps) 01:05:10.972 01:05:10.972 11:02:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0v4g2fk9zw3d629xa3cxjvuc0mkuvoi09pyx27izyxa42jv9egrbnnxevsoz2av2zg66f113luu8oyjqi3dguyenadhxab1h53sl9hvsedcec73q6nqo9raxwb4234o9czr9zxpwzx4mpmwmyxin8ns0yueywaf6mnc6q521vd9v3li487smhnyi1ytyjdtdtnxvkx219fkmjul32udrq7ofc7hpp10kk9881p7ppm4p08168a77mc8ja8r1acq6lyaqb7bzfyx442qomt69lx2w1t2h45xqjbultcdxwf6wnaynp7ibzekw5skjizd2heynndx4j9t0x82reor6wo9uq6jz3gn89xmbx86w2zvvrh3decq9m3uxp3f30x8yit27l5wiyg3lnm1piokj0p0g1g506y18jq8xq5g6eotlf3m25rfzn45dcs65f9rhgku5nhfyhqenzpdpw4z73jjggburcjvmz8gixbxnoxexzsi97kjx2o2zvw58kbm6 == \0\v\4\g\2\f\k\9\z\w\3\d\6\2\9\x\a\3\c\x\j\v\u\c\0\m\k\u\v\o\i\0\9\p\y\x\2\7\i\z\y\x\a\4\2\j\v\9\e\g\r\b\n\n\x\e\v\s\o\z\2\a\v\2\z\g\6\6\f\1\1\3\l\u\u\8\o\y\j\q\i\3\d\g\u\y\e\n\a\d\h\x\a\b\1\h\5\3\s\l\9\h\v\s\e\d\c\e\c\7\3\q\6\n\q\o\9\r\a\x\w\b\4\2\3\4\o\9\c\z\r\9\z\x\p\w\z\x\4\m\p\m\w\m\y\x\i\n\8\n\s\0\y\u\e\y\w\a\f\6\m\n\c\6\q\5\2\1\v\d\9\v\3\l\i\4\8\7\s\m\h\n\y\i\1\y\t\y\j\d\t\d\t\n\x\v\k\x\2\1\9\f\k\m\j\u\l\3\2\u\d\r\q\7\o\f\c\7\h\p\p\1\0\k\k\9\8\8\1\p\7\p\p\m\4\p\0\8\1\6\8\a\7\7\m\c\8\j\a\8\r\1\a\c\q\6\l\y\a\q\b\7\b\z\f\y\x\4\4\2\q\o\m\t\6\9\l\x\2\w\1\t\2\h\4\5\x\q\j\b\u\l\t\c\d\x\w\f\6\w\n\a\y\n\p\7\i\b\z\e\k\w\5\s\k\j\i\z\d\2\h\e\y\n\n\d\x\4\j\9\t\0\x\8\2\r\e\o\r\6\w\o\9\u\q\6\j\z\3\g\n\8\9\x\m\b\x\8\6\w\2\z\v\v\r\h\3\d\e\c\q\9\m\3\u\x\p\3\f\3\0\x\8\y\i\t\2\7\l\5\w\i\y\g\3\l\n\m\1\p\i\o\k\j\0\p\0\g\1\g\5\0\6\y\1\8\j\q\8\x\q\5\g\6\e\o\t\l\f\3\m\2\5\r\f\z\n\4\5\d\c\s\6\5\f\9\r\h\g\k\u\5\n\h\f\y\h\q\e\n\z\p\d\p\w\4\z\7\3\j\j\g\g\b\u\r\c\j\v\m\z\8\g\i\x\b\x\n\o\x\e\x\z\s\i\9\7\k\j\x\2\o\2\z\v\w\5\8\k\b\m\6 ]] 01:05:10.972 01:05:10.972 real 0m3.796s 01:05:10.972 user 0m1.851s 01:05:10.972 sys 0m0.978s 01:05:10.972 11:02:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:10.972 ************************************ 01:05:10.972 END TEST dd_flags_misc_forced_aio 01:05:10.972 11:02:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:05:10.972 ************************************ 01:05:10.972 11:02:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 01:05:10.972 11:02:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 01:05:10.972 11:02:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:05:10.972 11:02:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:05:10.972 01:05:10.972 real 0m18.138s 01:05:10.972 user 0m7.773s 01:05:10.972 sys 0m5.960s 01:05:10.972 11:02:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:10.972 11:02:16 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:05:10.972 ************************************ 01:05:10.972 END TEST spdk_dd_posix 01:05:10.972 ************************************ 01:05:10.972 11:02:16 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 01:05:10.972 11:02:16 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 01:05:10.972 11:02:16 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:10.972 11:02:16 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:10.972 11:02:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:05:10.972 ************************************ 01:05:10.972 START TEST spdk_dd_malloc 01:05:10.972 ************************************ 01:05:10.972 11:02:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 01:05:11.231 * Looking for test storage... 01:05:11.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 01:05:11.231 ************************************ 01:05:11.231 START TEST dd_malloc_copy 01:05:11.231 ************************************ 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 01:05:11.231 11:02:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 01:05:11.231 [2024-07-22 11:02:16.361016] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:11.231 [2024-07-22 11:02:16.361081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75998 ] 01:05:11.231 { 01:05:11.231 "subsystems": [ 01:05:11.231 { 01:05:11.231 "subsystem": "bdev", 01:05:11.231 "config": [ 01:05:11.231 { 01:05:11.231 "params": { 01:05:11.231 "block_size": 512, 01:05:11.231 "num_blocks": 1048576, 01:05:11.231 "name": "malloc0" 01:05:11.231 }, 01:05:11.231 "method": "bdev_malloc_create" 01:05:11.231 }, 01:05:11.231 { 01:05:11.231 "params": { 01:05:11.231 "block_size": 512, 01:05:11.231 "num_blocks": 1048576, 01:05:11.231 "name": "malloc1" 01:05:11.231 }, 01:05:11.231 "method": "bdev_malloc_create" 01:05:11.231 }, 01:05:11.231 { 01:05:11.231 "method": "bdev_wait_for_examine" 01:05:11.231 } 01:05:11.231 ] 01:05:11.231 } 01:05:11.231 ] 01:05:11.231 } 01:05:11.489 [2024-07-22 11:02:16.501338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:11.489 [2024-07-22 11:02:16.545524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:11.489 [2024-07-22 11:02:16.587435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:14.358  Copying: 252/512 [MB] (252 MBps) Copying: 510/512 [MB] (258 MBps) Copying: 512/512 [MB] (average 255 MBps) 01:05:14.358 01:05:14.358 11:02:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 01:05:14.358 11:02:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 01:05:14.358 11:02:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 01:05:14.358 11:02:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 01:05:14.358 [2024-07-22 11:02:19.374130] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
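The malloc-to-malloc copies in this test are driven by the JSON bdev configuration echoed above, handed to spdk_dd over a file descriptor (--json /dev/fd/62). Written to an ordinary file, the same setup looks like the sketch below; malloc.json is an illustrative name, not one the test uses.

# Two 512 MB RAM-backed malloc bdevs (1048576 blocks of 512 bytes) and a copy between them.
cat > malloc.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json malloc.json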
01:05:14.359 [2024-07-22 11:02:19.374208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76035 ] 01:05:14.359 { 01:05:14.359 "subsystems": [ 01:05:14.359 { 01:05:14.359 "subsystem": "bdev", 01:05:14.359 "config": [ 01:05:14.359 { 01:05:14.359 "params": { 01:05:14.359 "block_size": 512, 01:05:14.359 "num_blocks": 1048576, 01:05:14.359 "name": "malloc0" 01:05:14.359 }, 01:05:14.359 "method": "bdev_malloc_create" 01:05:14.359 }, 01:05:14.359 { 01:05:14.359 "params": { 01:05:14.359 "block_size": 512, 01:05:14.359 "num_blocks": 1048576, 01:05:14.359 "name": "malloc1" 01:05:14.359 }, 01:05:14.359 "method": "bdev_malloc_create" 01:05:14.359 }, 01:05:14.359 { 01:05:14.359 "method": "bdev_wait_for_examine" 01:05:14.359 } 01:05:14.359 ] 01:05:14.359 } 01:05:14.359 ] 01:05:14.359 } 01:05:14.359 [2024-07-22 11:02:19.515360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:14.359 [2024-07-22 11:02:19.556832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:14.624 [2024-07-22 11:02:19.598394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:17.204  Copying: 248/512 [MB] (248 MBps) Copying: 506/512 [MB] (258 MBps) Copying: 512/512 [MB] (average 253 MBps) 01:05:17.204 01:05:17.204 01:05:17.204 real 0m6.042s 01:05:17.204 user 0m5.212s 01:05:17.204 sys 0m0.690s 01:05:17.204 11:02:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:17.204 11:02:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 01:05:17.204 ************************************ 01:05:17.204 END TEST dd_malloc_copy 01:05:17.204 ************************************ 01:05:17.204 11:02:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 01:05:17.204 01:05:17.204 real 0m6.250s 01:05:17.204 user 0m5.288s 01:05:17.204 sys 0m0.831s 01:05:17.204 11:02:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:17.204 11:02:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 01:05:17.204 ************************************ 01:05:17.204 END TEST spdk_dd_malloc 01:05:17.204 ************************************ 01:05:17.463 11:02:22 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 01:05:17.463 11:02:22 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 01:05:17.463 11:02:22 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:05:17.463 11:02:22 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:17.463 11:02:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:05:17.463 ************************************ 01:05:17.463 START TEST spdk_dd_bdev_to_bdev 01:05:17.463 ************************************ 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 01:05:17.463 * Looking for test storage... 
01:05:17.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 01:05:17.463 
11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:05:17.463 ************************************ 01:05:17.463 START TEST dd_inflate_file 01:05:17.463 ************************************ 01:05:17.463 11:02:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 01:05:17.721 [2024-07-22 11:02:22.674125] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:17.721 [2024-07-22 11:02:22.674192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76134 ] 01:05:17.721 [2024-07-22 11:02:22.813868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:17.721 [2024-07-22 11:02:22.855560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:17.721 [2024-07-22 11:02:22.896774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:17.978  Copying: 64/64 [MB] (average 1361 MBps) 01:05:17.978 01:05:17.978 01:05:17.978 real 0m0.513s 01:05:17.978 user 0m0.277s 01:05:17.978 sys 0m0.289s 01:05:17.978 11:02:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:17.978 11:02:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 01:05:17.978 ************************************ 01:05:17.978 END TEST dd_inflate_file 01:05:17.978 ************************************ 01:05:18.235 11:02:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 01:05:18.235 11:02:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 01:05:18.235 11:02:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 01:05:18.235 11:02:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 01:05:18.235 11:02:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 01:05:18.235 11:02:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 01:05:18.235 11:02:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 01:05:18.235 11:02:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:18.235 11:02:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:05:18.235 11:02:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:05:18.235 ************************************ 01:05:18.235 START TEST dd_copy_to_out_bdev 01:05:18.235 ************************************ 01:05:18.235 11:02:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 01:05:18.235 { 01:05:18.235 "subsystems": [ 01:05:18.235 { 01:05:18.235 "subsystem": "bdev", 01:05:18.235 "config": [ 01:05:18.235 { 01:05:18.235 "params": { 01:05:18.235 "trtype": "pcie", 01:05:18.235 "traddr": "0000:00:10.0", 01:05:18.235 "name": "Nvme0" 01:05:18.235 }, 01:05:18.235 "method": "bdev_nvme_attach_controller" 01:05:18.235 }, 01:05:18.235 { 01:05:18.235 "params": { 01:05:18.235 "trtype": "pcie", 01:05:18.235 "traddr": "0000:00:11.0", 01:05:18.235 "name": "Nvme1" 01:05:18.235 }, 01:05:18.235 "method": "bdev_nvme_attach_controller" 01:05:18.235 }, 01:05:18.235 { 01:05:18.235 "method": "bdev_wait_for_examine" 01:05:18.235 } 01:05:18.236 ] 01:05:18.236 } 01:05:18.236 ] 01:05:18.236 } 01:05:18.236 [2024-07-22 11:02:23.269041] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:18.236 [2024-07-22 11:02:23.269106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76173 ] 01:05:18.236 [2024-07-22 11:02:23.411464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:18.493 [2024-07-22 11:02:23.453318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:18.493 [2024-07-22 11:02:23.494959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:19.879  Copying: 56/64 [MB] (56 MBps) Copying: 64/64 [MB] (average 56 MBps) 01:05:19.879 01:05:19.879 01:05:19.879 real 0m1.781s 01:05:19.879 user 0m1.573s 01:05:19.879 sys 0m1.452s 01:05:19.879 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:19.879 ************************************ 01:05:19.879 END TEST dd_copy_to_out_bdev 01:05:19.879 ************************************ 01:05:19.879 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 01:05:19.879 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 01:05:19.879 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 01:05:19.879 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 01:05:19.879 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:19.879 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:19.879 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:05:19.879 ************************************ 01:05:19.879 START TEST dd_offset_magic 01:05:19.879 ************************************ 01:05:19.879 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 01:05:19.879 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 01:05:19.879 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 01:05:19.880 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 01:05:19.880 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 01:05:19.880 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 01:05:19.880 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 01:05:19.880 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 01:05:19.880 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:05:20.137 [2024-07-22 11:02:25.115901] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:20.137 [2024-07-22 11:02:25.115968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76218 ] 01:05:20.137 { 01:05:20.137 "subsystems": [ 01:05:20.137 { 01:05:20.137 "subsystem": "bdev", 01:05:20.137 "config": [ 01:05:20.137 { 01:05:20.137 "params": { 01:05:20.137 "trtype": "pcie", 01:05:20.137 "traddr": "0000:00:10.0", 01:05:20.137 "name": "Nvme0" 01:05:20.137 }, 01:05:20.137 "method": "bdev_nvme_attach_controller" 01:05:20.137 }, 01:05:20.137 { 01:05:20.137 "params": { 01:05:20.137 "trtype": "pcie", 01:05:20.137 "traddr": "0000:00:11.0", 01:05:20.137 "name": "Nvme1" 01:05:20.137 }, 01:05:20.137 "method": "bdev_nvme_attach_controller" 01:05:20.137 }, 01:05:20.137 { 01:05:20.137 "method": "bdev_wait_for_examine" 01:05:20.137 } 01:05:20.137 ] 01:05:20.137 } 01:05:20.137 ] 01:05:20.137 } 01:05:20.137 [2024-07-22 11:02:25.257185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:20.137 [2024-07-22 11:02:25.298730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:20.137 [2024-07-22 11:02:25.340347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:20.655  Copying: 65/65 [MB] (average 625 MBps) 01:05:20.655 01:05:20.655 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 01:05:20.655 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 01:05:20.655 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 01:05:20.655 11:02:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:05:20.655 [2024-07-22 11:02:25.852048] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:20.656 [2024-07-22 11:02:25.852131] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76227 ] 01:05:20.914 { 01:05:20.914 "subsystems": [ 01:05:20.914 { 01:05:20.914 "subsystem": "bdev", 01:05:20.914 "config": [ 01:05:20.914 { 01:05:20.914 "params": { 01:05:20.914 "trtype": "pcie", 01:05:20.914 "traddr": "0000:00:10.0", 01:05:20.914 "name": "Nvme0" 01:05:20.914 }, 01:05:20.914 "method": "bdev_nvme_attach_controller" 01:05:20.914 }, 01:05:20.914 { 01:05:20.914 "params": { 01:05:20.914 "trtype": "pcie", 01:05:20.914 "traddr": "0000:00:11.0", 01:05:20.914 "name": "Nvme1" 01:05:20.914 }, 01:05:20.914 "method": "bdev_nvme_attach_controller" 01:05:20.914 }, 01:05:20.914 { 01:05:20.914 "method": "bdev_wait_for_examine" 01:05:20.914 } 01:05:20.914 ] 01:05:20.914 } 01:05:20.914 ] 01:05:20.914 } 01:05:20.914 [2024-07-22 11:02:25.998014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:20.914 [2024-07-22 11:02:26.043656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:20.914 [2024-07-22 11:02:26.085660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:21.432  Copying: 1024/1024 [kB] (average 500 MBps) 01:05:21.432 01:05:21.432 11:02:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 01:05:21.432 11:02:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 01:05:21.432 11:02:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 01:05:21.432 11:02:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 01:05:21.432 11:02:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 01:05:21.432 11:02:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 01:05:21.432 11:02:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:05:21.432 [2024-07-22 11:02:26.454054] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:21.432 [2024-07-22 11:02:26.454118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76249 ] 01:05:21.432 { 01:05:21.432 "subsystems": [ 01:05:21.432 { 01:05:21.432 "subsystem": "bdev", 01:05:21.432 "config": [ 01:05:21.432 { 01:05:21.432 "params": { 01:05:21.432 "trtype": "pcie", 01:05:21.432 "traddr": "0000:00:10.0", 01:05:21.432 "name": "Nvme0" 01:05:21.432 }, 01:05:21.432 "method": "bdev_nvme_attach_controller" 01:05:21.432 }, 01:05:21.432 { 01:05:21.432 "params": { 01:05:21.432 "trtype": "pcie", 01:05:21.432 "traddr": "0000:00:11.0", 01:05:21.432 "name": "Nvme1" 01:05:21.432 }, 01:05:21.432 "method": "bdev_nvme_attach_controller" 01:05:21.432 }, 01:05:21.432 { 01:05:21.432 "method": "bdev_wait_for_examine" 01:05:21.432 } 01:05:21.432 ] 01:05:21.432 } 01:05:21.432 ] 01:05:21.432 } 01:05:21.432 [2024-07-22 11:02:26.584671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:21.432 [2024-07-22 11:02:26.626685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:21.691 [2024-07-22 11:02:26.667956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:21.950  Copying: 65/65 [MB] (average 706 MBps) 01:05:21.950 01:05:21.950 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 01:05:21.950 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 01:05:21.950 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 01:05:21.950 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:05:22.210 [2024-07-22 11:02:27.183971] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:22.210 [2024-07-22 11:02:27.184035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76269 ] 01:05:22.210 { 01:05:22.210 "subsystems": [ 01:05:22.210 { 01:05:22.210 "subsystem": "bdev", 01:05:22.210 "config": [ 01:05:22.210 { 01:05:22.210 "params": { 01:05:22.210 "trtype": "pcie", 01:05:22.210 "traddr": "0000:00:10.0", 01:05:22.210 "name": "Nvme0" 01:05:22.210 }, 01:05:22.210 "method": "bdev_nvme_attach_controller" 01:05:22.210 }, 01:05:22.210 { 01:05:22.210 "params": { 01:05:22.210 "trtype": "pcie", 01:05:22.210 "traddr": "0000:00:11.0", 01:05:22.210 "name": "Nvme1" 01:05:22.210 }, 01:05:22.210 "method": "bdev_nvme_attach_controller" 01:05:22.210 }, 01:05:22.210 { 01:05:22.210 "method": "bdev_wait_for_examine" 01:05:22.210 } 01:05:22.210 ] 01:05:22.210 } 01:05:22.210 ] 01:05:22.210 } 01:05:22.210 [2024-07-22 11:02:27.326541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:22.210 [2024-07-22 11:02:27.367742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:22.210 [2024-07-22 11:02:27.409223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:22.728  Copying: 1024/1024 [kB] (average 500 MBps) 01:05:22.728 01:05:22.728 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 01:05:22.728 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 01:05:22.728 01:05:22.728 real 0m2.676s 01:05:22.728 user 0m1.901s 01:05:22.728 sys 0m0.810s 01:05:22.729 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:22.729 ************************************ 01:05:22.729 END TEST dd_offset_magic 01:05:22.729 ************************************ 01:05:22.729 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:05:22.729 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 01:05:22.729 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 01:05:22.729 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 01:05:22.729 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:05:22.729 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 01:05:22.729 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 01:05:22.729 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 01:05:22.729 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 01:05:22.729 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 01:05:22.729 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 01:05:22.729 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 01:05:22.729 11:02:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:05:22.729 [2024-07-22 11:02:27.848444] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:22.729 [2024-07-22 11:02:27.848506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76295 ] 01:05:22.729 { 01:05:22.729 "subsystems": [ 01:05:22.729 { 01:05:22.729 "subsystem": "bdev", 01:05:22.729 "config": [ 01:05:22.729 { 01:05:22.729 "params": { 01:05:22.729 "trtype": "pcie", 01:05:22.729 "traddr": "0000:00:10.0", 01:05:22.729 "name": "Nvme0" 01:05:22.729 }, 01:05:22.729 "method": "bdev_nvme_attach_controller" 01:05:22.729 }, 01:05:22.729 { 01:05:22.729 "params": { 01:05:22.729 "trtype": "pcie", 01:05:22.729 "traddr": "0000:00:11.0", 01:05:22.729 "name": "Nvme1" 01:05:22.729 }, 01:05:22.729 "method": "bdev_nvme_attach_controller" 01:05:22.729 }, 01:05:22.729 { 01:05:22.729 "method": "bdev_wait_for_examine" 01:05:22.729 } 01:05:22.729 ] 01:05:22.729 } 01:05:22.729 ] 01:05:22.729 } 01:05:22.987 [2024-07-22 11:02:27.983344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:22.987 [2024-07-22 11:02:28.026647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:22.987 [2024-07-22 11:02:28.070793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:23.245  Copying: 5120/5120 [kB] (average 1000 MBps) 01:05:23.245 01:05:23.245 11:02:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 01:05:23.245 11:02:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 01:05:23.245 11:02:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 01:05:23.245 11:02:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 01:05:23.245 11:02:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 01:05:23.245 11:02:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 01:05:23.245 11:02:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 01:05:23.245 11:02:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 01:05:23.245 11:02:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 01:05:23.245 11:02:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:05:23.504 [2024-07-22 11:02:28.460521] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:23.505 [2024-07-22 11:02:28.460590] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76316 ] 01:05:23.505 { 01:05:23.505 "subsystems": [ 01:05:23.505 { 01:05:23.505 "subsystem": "bdev", 01:05:23.505 "config": [ 01:05:23.505 { 01:05:23.505 "params": { 01:05:23.505 "trtype": "pcie", 01:05:23.505 "traddr": "0000:00:10.0", 01:05:23.505 "name": "Nvme0" 01:05:23.505 }, 01:05:23.505 "method": "bdev_nvme_attach_controller" 01:05:23.505 }, 01:05:23.505 { 01:05:23.505 "params": { 01:05:23.505 "trtype": "pcie", 01:05:23.505 "traddr": "0000:00:11.0", 01:05:23.505 "name": "Nvme1" 01:05:23.505 }, 01:05:23.505 "method": "bdev_nvme_attach_controller" 01:05:23.505 }, 01:05:23.505 { 01:05:23.505 "method": "bdev_wait_for_examine" 01:05:23.505 } 01:05:23.505 ] 01:05:23.505 } 01:05:23.505 ] 01:05:23.505 } 01:05:23.505 [2024-07-22 11:02:28.600242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:23.505 [2024-07-22 11:02:28.643437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:23.505 [2024-07-22 11:02:28.685110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:24.022  Copying: 5120/5120 [kB] (average 500 MBps) 01:05:24.023 01:05:24.023 11:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 01:05:24.023 ************************************ 01:05:24.023 END TEST spdk_dd_bdev_to_bdev 01:05:24.023 ************************************ 01:05:24.023 01:05:24.023 real 0m6.575s 01:05:24.023 user 0m4.713s 01:05:24.023 sys 0m3.269s 01:05:24.023 11:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:24.023 11:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:05:24.023 11:02:29 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 01:05:24.023 11:02:29 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 01:05:24.023 11:02:29 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 01:05:24.023 11:02:29 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:24.023 11:02:29 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:24.023 11:02:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:05:24.023 ************************************ 01:05:24.023 START TEST spdk_dd_uring 01:05:24.023 ************************************ 01:05:24.023 11:02:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 01:05:24.023 * Looking for test storage... 
01:05:24.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 01:05:24.283 ************************************ 01:05:24.283 START TEST dd_uring_copy 01:05:24.283 ************************************ 01:05:24.283 
11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=7do2vfk7ajyzpha09dfepxz2ffbvvnp2cr82n5jtpdffm2ymkppmzhfo3qir6esw7oqtsoaez8n35oifudchcoeon1h2etbdzpc6n5qfrl6gd4hq5o2vd52ydcbp1jcdog3i0num5fctb0xs1au2lh7n3nqcy1h281rcevmdqw0vctutx54s5jt0jh20uzuicrs7pcyvm4fzqd2xvz0iq7pin5pbpvs5lrfk628b75yik41z1kl4yp5udt3z82mugfrloz0kzk3b94ms5f478nnvem9l93tc3surrcmxa6zod5ts5ooy6jylq5tatzw4spmz5u8kydboa4215wj8jc5k1nivt6nvvyvei9rii1dd3e6vc0zgdzy0uszev7vgistkwl86jy1z8r6s5uvihgv9wstaukaj49t0zwdwzukpljcwsbnch868elgn87l1s81eand2qh9psx9ash4vivr4t0996ozlwavgl8chn367kdmom48vkfaubwf7mq16lscnwok3xmv9qv9yrwjkle8p7pro9l1x7sosi0ivld96bgk6jyxo56y1a25mb4fyabvj6sz39oz96xj6lqkm9pvq7xjb7nedvw50i5ueie9xrlusap8dp7nc1apejzk3ndplyjaripadvpnnqofawzoeyjvk6t95hw9kk1o0y77jxp2d4cbpb2ww34gtiz1td3d1hj6mappubxvd5tw0rysgj3rqrwjocen6qvt78havtqt0lt9inmirrqtmchsm8ij2j7nnwaf7bo29qgs6bc98gjfpc7fi5zy87c6r7nk5v12houvju2b1eedzdqylijl2iaojcdc4pagy2zcq42qpyz97yimn5rth6on9acrbkdnh9c6fwfqplje6jzp3vmkbrcalfgt2mhasq4asg9w1ac9m015f451fw7y5xtwkudsv8ohaxyv0iznko1azjtx2azcpoktu67q0f3dbieyg7r1ukxm728kzd1t8s7dgopqwz8v5xuc6r772ig55 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 7do2vfk7ajyzpha09dfepxz2ffbvvnp2cr82n5jtpdffm2ymkppmzhfo3qir6esw7oqtsoaez8n35oifudchcoeon1h2etbdzpc6n5qfrl6gd4hq5o2vd52ydcbp1jcdog3i0num5fctb0xs1au2lh7n3nqcy1h281rcevmdqw0vctutx54s5jt0jh20uzuicrs7pcyvm4fzqd2xvz0iq7pin5pbpvs5lrfk628b75yik41z1kl4yp5udt3z82mugfrloz0kzk3b94ms5f478nnvem9l93tc3surrcmxa6zod5ts5ooy6jylq5tatzw4spmz5u8kydboa4215wj8jc5k1nivt6nvvyvei9rii1dd3e6vc0zgdzy0uszev7vgistkwl86jy1z8r6s5uvihgv9wstaukaj49t0zwdwzukpljcwsbnch868elgn87l1s81eand2qh9psx9ash4vivr4t0996ozlwavgl8chn367kdmom48vkfaubwf7mq16lscnwok3xmv9qv9yrwjkle8p7pro9l1x7sosi0ivld96bgk6jyxo56y1a25mb4fyabvj6sz39oz96xj6lqkm9pvq7xjb7nedvw50i5ueie9xrlusap8dp7nc1apejzk3ndplyjaripadvpnnqofawzoeyjvk6t95hw9kk1o0y77jxp2d4cbpb2ww34gtiz1td3d1hj6mappubxvd5tw0rysgj3rqrwjocen6qvt78havtqt0lt9inmirrqtmchsm8ij2j7nnwaf7bo29qgs6bc98gjfpc7fi5zy87c6r7nk5v12houvju2b1eedzdqylijl2iaojcdc4pagy2zcq42qpyz97yimn5rth6on9acrbkdnh9c6fwfqplje6jzp3vmkbrcalfgt2mhasq4asg9w1ac9m015f451fw7y5xtwkudsv8ohaxyv0iznko1azjtx2azcpoktu67q0f3dbieyg7r1ukxm728kzd1t8s7dgopqwz8v5xuc6r772ig55 01:05:24.283 11:02:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 01:05:24.283 [2024-07-22 11:02:29.330714] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:24.283 [2024-07-22 11:02:29.330770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76386 ] 01:05:24.283 [2024-07-22 11:02:29.466745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:24.543 [2024-07-22 11:02:29.508363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:24.543 [2024-07-22 11:02:29.549010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:25.370  Copying: 511/511 [MB] (average 1354 MBps) 01:05:25.370 01:05:25.370 11:02:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 01:05:25.370 11:02:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 01:05:25.370 11:02:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:05:25.370 11:02:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:05:25.370 [2024-07-22 11:02:30.472350] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:25.370 [2024-07-22 11:02:30.472412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76402 ] 01:05:25.370 { 01:05:25.370 "subsystems": [ 01:05:25.370 { 01:05:25.370 "subsystem": "bdev", 01:05:25.370 "config": [ 01:05:25.370 { 01:05:25.370 "params": { 01:05:25.370 "block_size": 512, 01:05:25.370 "num_blocks": 1048576, 01:05:25.370 "name": "malloc0" 01:05:25.370 }, 01:05:25.370 "method": "bdev_malloc_create" 01:05:25.370 }, 01:05:25.370 { 01:05:25.370 "params": { 01:05:25.370 "filename": "/dev/zram1", 01:05:25.370 "name": "uring0" 01:05:25.370 }, 01:05:25.370 "method": "bdev_uring_create" 01:05:25.370 }, 01:05:25.370 { 01:05:25.370 "method": "bdev_wait_for_examine" 01:05:25.370 } 01:05:25.370 ] 01:05:25.370 } 01:05:25.371 ] 01:05:25.371 } 01:05:25.630 [2024-07-22 11:02:30.613309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:25.630 [2024-07-22 11:02:30.655633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:25.630 [2024-07-22 11:02:30.697192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:28.179  Copying: 268/512 [MB] (268 MBps) Copying: 512/512 [MB] (average 266 MBps) 01:05:28.179 01:05:28.179 11:02:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 01:05:28.179 11:02:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 01:05:28.179 11:02:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:05:28.179 11:02:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:05:28.179 [2024-07-22 11:02:33.126125] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:28.179 [2024-07-22 11:02:33.126191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76436 ] 01:05:28.179 { 01:05:28.179 "subsystems": [ 01:05:28.179 { 01:05:28.179 "subsystem": "bdev", 01:05:28.179 "config": [ 01:05:28.179 { 01:05:28.179 "params": { 01:05:28.179 "block_size": 512, 01:05:28.179 "num_blocks": 1048576, 01:05:28.179 "name": "malloc0" 01:05:28.179 }, 01:05:28.179 "method": "bdev_malloc_create" 01:05:28.179 }, 01:05:28.179 { 01:05:28.179 "params": { 01:05:28.179 "filename": "/dev/zram1", 01:05:28.179 "name": "uring0" 01:05:28.179 }, 01:05:28.179 "method": "bdev_uring_create" 01:05:28.179 }, 01:05:28.179 { 01:05:28.179 "method": "bdev_wait_for_examine" 01:05:28.179 } 01:05:28.179 ] 01:05:28.179 } 01:05:28.179 ] 01:05:28.179 } 01:05:28.179 [2024-07-22 11:02:33.269071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:28.179 [2024-07-22 11:02:33.312413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:28.179 [2024-07-22 11:02:33.354641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:31.335  Copying: 220/512 [MB] (220 MBps) Copying: 424/512 [MB] (203 MBps) Copying: 512/512 [MB] (average 207 MBps) 01:05:31.335 01:05:31.335 11:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 01:05:31.335 11:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 7do2vfk7ajyzpha09dfepxz2ffbvvnp2cr82n5jtpdffm2ymkppmzhfo3qir6esw7oqtsoaez8n35oifudchcoeon1h2etbdzpc6n5qfrl6gd4hq5o2vd52ydcbp1jcdog3i0num5fctb0xs1au2lh7n3nqcy1h281rcevmdqw0vctutx54s5jt0jh20uzuicrs7pcyvm4fzqd2xvz0iq7pin5pbpvs5lrfk628b75yik41z1kl4yp5udt3z82mugfrloz0kzk3b94ms5f478nnvem9l93tc3surrcmxa6zod5ts5ooy6jylq5tatzw4spmz5u8kydboa4215wj8jc5k1nivt6nvvyvei9rii1dd3e6vc0zgdzy0uszev7vgistkwl86jy1z8r6s5uvihgv9wstaukaj49t0zwdwzukpljcwsbnch868elgn87l1s81eand2qh9psx9ash4vivr4t0996ozlwavgl8chn367kdmom48vkfaubwf7mq16lscnwok3xmv9qv9yrwjkle8p7pro9l1x7sosi0ivld96bgk6jyxo56y1a25mb4fyabvj6sz39oz96xj6lqkm9pvq7xjb7nedvw50i5ueie9xrlusap8dp7nc1apejzk3ndplyjaripadvpnnqofawzoeyjvk6t95hw9kk1o0y77jxp2d4cbpb2ww34gtiz1td3d1hj6mappubxvd5tw0rysgj3rqrwjocen6qvt78havtqt0lt9inmirrqtmchsm8ij2j7nnwaf7bo29qgs6bc98gjfpc7fi5zy87c6r7nk5v12houvju2b1eedzdqylijl2iaojcdc4pagy2zcq42qpyz97yimn5rth6on9acrbkdnh9c6fwfqplje6jzp3vmkbrcalfgt2mhasq4asg9w1ac9m015f451fw7y5xtwkudsv8ohaxyv0iznko1azjtx2azcpoktu67q0f3dbieyg7r1ukxm728kzd1t8s7dgopqwz8v5xuc6r772ig55 == 
\7\d\o\2\v\f\k\7\a\j\y\z\p\h\a\0\9\d\f\e\p\x\z\2\f\f\b\v\v\n\p\2\c\r\8\2\n\5\j\t\p\d\f\f\m\2\y\m\k\p\p\m\z\h\f\o\3\q\i\r\6\e\s\w\7\o\q\t\s\o\a\e\z\8\n\3\5\o\i\f\u\d\c\h\c\o\e\o\n\1\h\2\e\t\b\d\z\p\c\6\n\5\q\f\r\l\6\g\d\4\h\q\5\o\2\v\d\5\2\y\d\c\b\p\1\j\c\d\o\g\3\i\0\n\u\m\5\f\c\t\b\0\x\s\1\a\u\2\l\h\7\n\3\n\q\c\y\1\h\2\8\1\r\c\e\v\m\d\q\w\0\v\c\t\u\t\x\5\4\s\5\j\t\0\j\h\2\0\u\z\u\i\c\r\s\7\p\c\y\v\m\4\f\z\q\d\2\x\v\z\0\i\q\7\p\i\n\5\p\b\p\v\s\5\l\r\f\k\6\2\8\b\7\5\y\i\k\4\1\z\1\k\l\4\y\p\5\u\d\t\3\z\8\2\m\u\g\f\r\l\o\z\0\k\z\k\3\b\9\4\m\s\5\f\4\7\8\n\n\v\e\m\9\l\9\3\t\c\3\s\u\r\r\c\m\x\a\6\z\o\d\5\t\s\5\o\o\y\6\j\y\l\q\5\t\a\t\z\w\4\s\p\m\z\5\u\8\k\y\d\b\o\a\4\2\1\5\w\j\8\j\c\5\k\1\n\i\v\t\6\n\v\v\y\v\e\i\9\r\i\i\1\d\d\3\e\6\v\c\0\z\g\d\z\y\0\u\s\z\e\v\7\v\g\i\s\t\k\w\l\8\6\j\y\1\z\8\r\6\s\5\u\v\i\h\g\v\9\w\s\t\a\u\k\a\j\4\9\t\0\z\w\d\w\z\u\k\p\l\j\c\w\s\b\n\c\h\8\6\8\e\l\g\n\8\7\l\1\s\8\1\e\a\n\d\2\q\h\9\p\s\x\9\a\s\h\4\v\i\v\r\4\t\0\9\9\6\o\z\l\w\a\v\g\l\8\c\h\n\3\6\7\k\d\m\o\m\4\8\v\k\f\a\u\b\w\f\7\m\q\1\6\l\s\c\n\w\o\k\3\x\m\v\9\q\v\9\y\r\w\j\k\l\e\8\p\7\p\r\o\9\l\1\x\7\s\o\s\i\0\i\v\l\d\9\6\b\g\k\6\j\y\x\o\5\6\y\1\a\2\5\m\b\4\f\y\a\b\v\j\6\s\z\3\9\o\z\9\6\x\j\6\l\q\k\m\9\p\v\q\7\x\j\b\7\n\e\d\v\w\5\0\i\5\u\e\i\e\9\x\r\l\u\s\a\p\8\d\p\7\n\c\1\a\p\e\j\z\k\3\n\d\p\l\y\j\a\r\i\p\a\d\v\p\n\n\q\o\f\a\w\z\o\e\y\j\v\k\6\t\9\5\h\w\9\k\k\1\o\0\y\7\7\j\x\p\2\d\4\c\b\p\b\2\w\w\3\4\g\t\i\z\1\t\d\3\d\1\h\j\6\m\a\p\p\u\b\x\v\d\5\t\w\0\r\y\s\g\j\3\r\q\r\w\j\o\c\e\n\6\q\v\t\7\8\h\a\v\t\q\t\0\l\t\9\i\n\m\i\r\r\q\t\m\c\h\s\m\8\i\j\2\j\7\n\n\w\a\f\7\b\o\2\9\q\g\s\6\b\c\9\8\g\j\f\p\c\7\f\i\5\z\y\8\7\c\6\r\7\n\k\5\v\1\2\h\o\u\v\j\u\2\b\1\e\e\d\z\d\q\y\l\i\j\l\2\i\a\o\j\c\d\c\4\p\a\g\y\2\z\c\q\4\2\q\p\y\z\9\7\y\i\m\n\5\r\t\h\6\o\n\9\a\c\r\b\k\d\n\h\9\c\6\f\w\f\q\p\l\j\e\6\j\z\p\3\v\m\k\b\r\c\a\l\f\g\t\2\m\h\a\s\q\4\a\s\g\9\w\1\a\c\9\m\0\1\5\f\4\5\1\f\w\7\y\5\x\t\w\k\u\d\s\v\8\o\h\a\x\y\v\0\i\z\n\k\o\1\a\z\j\t\x\2\a\z\c\p\o\k\t\u\6\7\q\0\f\3\d\b\i\e\y\g\7\r\1\u\k\x\m\7\2\8\k\z\d\1\t\8\s\7\d\g\o\p\q\w\z\8\v\5\x\u\c\6\r\7\7\2\i\g\5\5 ]] 01:05:31.335 11:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 01:05:31.336 11:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 7do2vfk7ajyzpha09dfepxz2ffbvvnp2cr82n5jtpdffm2ymkppmzhfo3qir6esw7oqtsoaez8n35oifudchcoeon1h2etbdzpc6n5qfrl6gd4hq5o2vd52ydcbp1jcdog3i0num5fctb0xs1au2lh7n3nqcy1h281rcevmdqw0vctutx54s5jt0jh20uzuicrs7pcyvm4fzqd2xvz0iq7pin5pbpvs5lrfk628b75yik41z1kl4yp5udt3z82mugfrloz0kzk3b94ms5f478nnvem9l93tc3surrcmxa6zod5ts5ooy6jylq5tatzw4spmz5u8kydboa4215wj8jc5k1nivt6nvvyvei9rii1dd3e6vc0zgdzy0uszev7vgistkwl86jy1z8r6s5uvihgv9wstaukaj49t0zwdwzukpljcwsbnch868elgn87l1s81eand2qh9psx9ash4vivr4t0996ozlwavgl8chn367kdmom48vkfaubwf7mq16lscnwok3xmv9qv9yrwjkle8p7pro9l1x7sosi0ivld96bgk6jyxo56y1a25mb4fyabvj6sz39oz96xj6lqkm9pvq7xjb7nedvw50i5ueie9xrlusap8dp7nc1apejzk3ndplyjaripadvpnnqofawzoeyjvk6t95hw9kk1o0y77jxp2d4cbpb2ww34gtiz1td3d1hj6mappubxvd5tw0rysgj3rqrwjocen6qvt78havtqt0lt9inmirrqtmchsm8ij2j7nnwaf7bo29qgs6bc98gjfpc7fi5zy87c6r7nk5v12houvju2b1eedzdqylijl2iaojcdc4pagy2zcq42qpyz97yimn5rth6on9acrbkdnh9c6fwfqplje6jzp3vmkbrcalfgt2mhasq4asg9w1ac9m015f451fw7y5xtwkudsv8ohaxyv0iznko1azjtx2azcpoktu67q0f3dbieyg7r1ukxm728kzd1t8s7dgopqwz8v5xuc6r772ig55 == 
\7\d\o\2\v\f\k\7\a\j\y\z\p\h\a\0\9\d\f\e\p\x\z\2\f\f\b\v\v\n\p\2\c\r\8\2\n\5\j\t\p\d\f\f\m\2\y\m\k\p\p\m\z\h\f\o\3\q\i\r\6\e\s\w\7\o\q\t\s\o\a\e\z\8\n\3\5\o\i\f\u\d\c\h\c\o\e\o\n\1\h\2\e\t\b\d\z\p\c\6\n\5\q\f\r\l\6\g\d\4\h\q\5\o\2\v\d\5\2\y\d\c\b\p\1\j\c\d\o\g\3\i\0\n\u\m\5\f\c\t\b\0\x\s\1\a\u\2\l\h\7\n\3\n\q\c\y\1\h\2\8\1\r\c\e\v\m\d\q\w\0\v\c\t\u\t\x\5\4\s\5\j\t\0\j\h\2\0\u\z\u\i\c\r\s\7\p\c\y\v\m\4\f\z\q\d\2\x\v\z\0\i\q\7\p\i\n\5\p\b\p\v\s\5\l\r\f\k\6\2\8\b\7\5\y\i\k\4\1\z\1\k\l\4\y\p\5\u\d\t\3\z\8\2\m\u\g\f\r\l\o\z\0\k\z\k\3\b\9\4\m\s\5\f\4\7\8\n\n\v\e\m\9\l\9\3\t\c\3\s\u\r\r\c\m\x\a\6\z\o\d\5\t\s\5\o\o\y\6\j\y\l\q\5\t\a\t\z\w\4\s\p\m\z\5\u\8\k\y\d\b\o\a\4\2\1\5\w\j\8\j\c\5\k\1\n\i\v\t\6\n\v\v\y\v\e\i\9\r\i\i\1\d\d\3\e\6\v\c\0\z\g\d\z\y\0\u\s\z\e\v\7\v\g\i\s\t\k\w\l\8\6\j\y\1\z\8\r\6\s\5\u\v\i\h\g\v\9\w\s\t\a\u\k\a\j\4\9\t\0\z\w\d\w\z\u\k\p\l\j\c\w\s\b\n\c\h\8\6\8\e\l\g\n\8\7\l\1\s\8\1\e\a\n\d\2\q\h\9\p\s\x\9\a\s\h\4\v\i\v\r\4\t\0\9\9\6\o\z\l\w\a\v\g\l\8\c\h\n\3\6\7\k\d\m\o\m\4\8\v\k\f\a\u\b\w\f\7\m\q\1\6\l\s\c\n\w\o\k\3\x\m\v\9\q\v\9\y\r\w\j\k\l\e\8\p\7\p\r\o\9\l\1\x\7\s\o\s\i\0\i\v\l\d\9\6\b\g\k\6\j\y\x\o\5\6\y\1\a\2\5\m\b\4\f\y\a\b\v\j\6\s\z\3\9\o\z\9\6\x\j\6\l\q\k\m\9\p\v\q\7\x\j\b\7\n\e\d\v\w\5\0\i\5\u\e\i\e\9\x\r\l\u\s\a\p\8\d\p\7\n\c\1\a\p\e\j\z\k\3\n\d\p\l\y\j\a\r\i\p\a\d\v\p\n\n\q\o\f\a\w\z\o\e\y\j\v\k\6\t\9\5\h\w\9\k\k\1\o\0\y\7\7\j\x\p\2\d\4\c\b\p\b\2\w\w\3\4\g\t\i\z\1\t\d\3\d\1\h\j\6\m\a\p\p\u\b\x\v\d\5\t\w\0\r\y\s\g\j\3\r\q\r\w\j\o\c\e\n\6\q\v\t\7\8\h\a\v\t\q\t\0\l\t\9\i\n\m\i\r\r\q\t\m\c\h\s\m\8\i\j\2\j\7\n\n\w\a\f\7\b\o\2\9\q\g\s\6\b\c\9\8\g\j\f\p\c\7\f\i\5\z\y\8\7\c\6\r\7\n\k\5\v\1\2\h\o\u\v\j\u\2\b\1\e\e\d\z\d\q\y\l\i\j\l\2\i\a\o\j\c\d\c\4\p\a\g\y\2\z\c\q\4\2\q\p\y\z\9\7\y\i\m\n\5\r\t\h\6\o\n\9\a\c\r\b\k\d\n\h\9\c\6\f\w\f\q\p\l\j\e\6\j\z\p\3\v\m\k\b\r\c\a\l\f\g\t\2\m\h\a\s\q\4\a\s\g\9\w\1\a\c\9\m\0\1\5\f\4\5\1\f\w\7\y\5\x\t\w\k\u\d\s\v\8\o\h\a\x\y\v\0\i\z\n\k\o\1\a\z\j\t\x\2\a\z\c\p\o\k\t\u\6\7\q\0\f\3\d\b\i\e\y\g\7\r\1\u\k\x\m\7\2\8\k\z\d\1\t\8\s\7\d\g\o\p\q\w\z\8\v\5\x\u\c\6\r\7\7\2\i\g\5\5 ]] 01:05:31.336 11:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 01:05:31.593 11:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 01:05:31.593 11:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 01:05:31.593 11:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:05:31.593 11:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:05:31.593 [2024-07-22 11:02:36.715954] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:31.593 [2024-07-22 11:02:36.716404] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76507 ] 01:05:31.593 { 01:05:31.593 "subsystems": [ 01:05:31.593 { 01:05:31.593 "subsystem": "bdev", 01:05:31.593 "config": [ 01:05:31.593 { 01:05:31.593 "params": { 01:05:31.593 "block_size": 512, 01:05:31.593 "num_blocks": 1048576, 01:05:31.593 "name": "malloc0" 01:05:31.593 }, 01:05:31.593 "method": "bdev_malloc_create" 01:05:31.593 }, 01:05:31.593 { 01:05:31.593 "params": { 01:05:31.593 "filename": "/dev/zram1", 01:05:31.593 "name": "uring0" 01:05:31.593 }, 01:05:31.593 "method": "bdev_uring_create" 01:05:31.593 }, 01:05:31.593 { 01:05:31.593 "method": "bdev_wait_for_examine" 01:05:31.593 } 01:05:31.593 ] 01:05:31.593 } 01:05:31.593 ] 01:05:31.593 } 01:05:31.902 [2024-07-22 11:02:36.857486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:31.902 [2024-07-22 11:02:36.898505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:31.902 [2024-07-22 11:02:36.939753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:34.735  Copying: 199/512 [MB] (199 MBps) Copying: 406/512 [MB] (206 MBps) Copying: 512/512 [MB] (average 202 MBps) 01:05:34.735 01:05:34.735 11:02:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 01:05:34.735 11:02:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 01:05:34.735 11:02:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 01:05:34.735 11:02:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 01:05:34.735 11:02:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 01:05:34.735 11:02:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 01:05:34.995 11:02:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:05:34.995 11:02:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:05:34.995 [2024-07-22 11:02:39.988022] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:34.995 [2024-07-22 11:02:39.988085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76552 ] 01:05:34.995 { 01:05:34.995 "subsystems": [ 01:05:34.995 { 01:05:34.995 "subsystem": "bdev", 01:05:34.995 "config": [ 01:05:34.995 { 01:05:34.995 "params": { 01:05:34.995 "block_size": 512, 01:05:34.995 "num_blocks": 1048576, 01:05:34.995 "name": "malloc0" 01:05:34.995 }, 01:05:34.995 "method": "bdev_malloc_create" 01:05:34.995 }, 01:05:34.995 { 01:05:34.995 "params": { 01:05:34.995 "filename": "/dev/zram1", 01:05:34.995 "name": "uring0" 01:05:34.995 }, 01:05:34.995 "method": "bdev_uring_create" 01:05:34.995 }, 01:05:34.995 { 01:05:34.995 "params": { 01:05:34.995 "name": "uring0" 01:05:34.995 }, 01:05:34.995 "method": "bdev_uring_delete" 01:05:34.995 }, 01:05:34.995 { 01:05:34.995 "method": "bdev_wait_for_examine" 01:05:34.995 } 01:05:34.995 ] 01:05:34.995 } 01:05:34.995 ] 01:05:34.995 } 01:05:34.995 [2024-07-22 11:02:40.128484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:34.995 [2024-07-22 11:02:40.169349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:35.255 [2024-07-22 11:02:40.210758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:35.515  Copying: 0/0 [B] (average 0 Bps) 01:05:35.515 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:35.515 11:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 01:05:35.775 [2024-07-22 11:02:40.738413] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:35.775 [2024-07-22 11:02:40.738474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76576 ] 01:05:35.775 { 01:05:35.775 "subsystems": [ 01:05:35.775 { 01:05:35.775 "subsystem": "bdev", 01:05:35.775 "config": [ 01:05:35.775 { 01:05:35.775 "params": { 01:05:35.775 "block_size": 512, 01:05:35.775 "num_blocks": 1048576, 01:05:35.775 "name": "malloc0" 01:05:35.775 }, 01:05:35.775 "method": "bdev_malloc_create" 01:05:35.775 }, 01:05:35.775 { 01:05:35.775 "params": { 01:05:35.775 "filename": "/dev/zram1", 01:05:35.775 "name": "uring0" 01:05:35.775 }, 01:05:35.775 "method": "bdev_uring_create" 01:05:35.775 }, 01:05:35.775 { 01:05:35.775 "params": { 01:05:35.775 "name": "uring0" 01:05:35.775 }, 01:05:35.775 "method": "bdev_uring_delete" 01:05:35.775 }, 01:05:35.775 { 01:05:35.775 "method": "bdev_wait_for_examine" 01:05:35.775 } 01:05:35.775 ] 01:05:35.775 } 01:05:35.775 ] 01:05:35.775 } 01:05:35.775 [2024-07-22 11:02:40.879003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:35.775 [2024-07-22 11:02:40.921383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:35.775 [2024-07-22 11:02:40.963218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:36.034 [2024-07-22 11:02:41.121016] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 01:05:36.034 [2024-07-22 11:02:41.121056] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 01:05:36.034 [2024-07-22 11:02:41.121081] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 01:05:36.034 [2024-07-22 11:02:41.121090] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:05:36.292 [2024-07-22 11:02:41.368557] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:05:36.551 11:02:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 01:05:36.551 11:02:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:36.551 11:02:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 01:05:36.551 11:02:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 01:05:36.551 11:02:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 01:05:36.551 11:02:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:36.551 11:02:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 01:05:36.551 11:02:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 01:05:36.551 11:02:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 01:05:36.551 11:02:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 01:05:36.551 11:02:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 01:05:36.551 11:02:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 01:05:36.809 01:05:36.809 real 0m12.563s 01:05:36.809 user 0m8.171s 01:05:36.809 sys 0m10.623s 01:05:36.809 11:02:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:36.809 11:02:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:05:36.809 ************************************ 01:05:36.809 END TEST dd_uring_copy 01:05:36.809 ************************************ 01:05:36.809 11:02:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 01:05:36.809 01:05:36.809 real 0m12.752s 01:05:36.809 user 0m8.241s 01:05:36.809 sys 0m10.752s 01:05:36.809 11:02:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:36.809 11:02:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 01:05:36.809 ************************************ 01:05:36.809 END TEST spdk_dd_uring 01:05:36.809 ************************************ 01:05:36.809 11:02:41 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 01:05:36.809 11:02:41 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 01:05:36.809 11:02:41 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:36.809 11:02:41 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:36.809 11:02:41 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:05:36.809 ************************************ 01:05:36.809 START TEST spdk_dd_sparse 01:05:36.809 ************************************ 01:05:36.809 11:02:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 01:05:37.068 * Looking for test storage... 01:05:37.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:05:37.068 11:02:42 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:37.068 11:02:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:37.068 11:02:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:37.068 11:02:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:37.068 11:02:42 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 01:05:37.069 1+0 records in 01:05:37.069 1+0 records out 01:05:37.069 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00949644 s, 442 MB/s 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 01:05:37.069 1+0 records in 01:05:37.069 1+0 records out 01:05:37.069 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00933657 s, 449 MB/s 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 01:05:37.069 1+0 records in 01:05:37.069 1+0 records out 01:05:37.069 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00890139 s, 471 MB/s 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 01:05:37.069 ************************************ 01:05:37.069 START TEST dd_sparse_file_to_file 01:05:37.069 ************************************ 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 01:05:37.069 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 01:05:37.069 [2024-07-22 11:02:42.192041] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:37.069 [2024-07-22 11:02:42.192104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76678 ] 01:05:37.069 { 01:05:37.069 "subsystems": [ 01:05:37.069 { 01:05:37.069 "subsystem": "bdev", 01:05:37.069 "config": [ 01:05:37.069 { 01:05:37.069 "params": { 01:05:37.069 "block_size": 4096, 01:05:37.069 "filename": "dd_sparse_aio_disk", 01:05:37.069 "name": "dd_aio" 01:05:37.069 }, 01:05:37.069 "method": "bdev_aio_create" 01:05:37.069 }, 01:05:37.069 { 01:05:37.069 "params": { 01:05:37.069 "lvs_name": "dd_lvstore", 01:05:37.069 "bdev_name": "dd_aio" 01:05:37.069 }, 01:05:37.069 "method": "bdev_lvol_create_lvstore" 01:05:37.069 }, 01:05:37.069 { 01:05:37.069 "method": "bdev_wait_for_examine" 01:05:37.069 } 01:05:37.069 ] 01:05:37.069 } 01:05:37.069 ] 01:05:37.069 } 01:05:37.327 [2024-07-22 11:02:42.333816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:37.327 [2024-07-22 11:02:42.374864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:37.327 [2024-07-22 11:02:42.415775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:37.585  Copying: 12/36 [MB] (average 631 MBps) 01:05:37.585 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 01:05:37.585 11:02:42 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 01:05:37.585 01:05:37.585 real 0m0.578s 01:05:37.585 user 0m0.332s 01:05:37.585 sys 0m0.322s 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:37.585 ************************************ 01:05:37.585 END TEST dd_sparse_file_to_file 01:05:37.585 ************************************ 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 01:05:37.585 ************************************ 01:05:37.585 START TEST dd_sparse_file_to_bdev 01:05:37.585 ************************************ 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 01:05:37.585 11:02:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:05:37.844 [2024-07-22 11:02:42.834051] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:37.844 [2024-07-22 11:02:42.834112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76721 ] 01:05:37.844 { 01:05:37.844 "subsystems": [ 01:05:37.844 { 01:05:37.844 "subsystem": "bdev", 01:05:37.844 "config": [ 01:05:37.844 { 01:05:37.844 "params": { 01:05:37.845 "block_size": 4096, 01:05:37.845 "filename": "dd_sparse_aio_disk", 01:05:37.845 "name": "dd_aio" 01:05:37.845 }, 01:05:37.845 "method": "bdev_aio_create" 01:05:37.845 }, 01:05:37.845 { 01:05:37.845 "params": { 01:05:37.845 "lvs_name": "dd_lvstore", 01:05:37.845 "lvol_name": "dd_lvol", 01:05:37.845 "size_in_mib": 36, 01:05:37.845 "thin_provision": true 01:05:37.845 }, 01:05:37.845 "method": "bdev_lvol_create" 01:05:37.845 }, 01:05:37.845 { 01:05:37.845 "method": "bdev_wait_for_examine" 01:05:37.845 } 01:05:37.845 ] 01:05:37.845 } 01:05:37.845 ] 01:05:37.845 } 01:05:37.845 [2024-07-22 11:02:42.973999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:37.845 [2024-07-22 11:02:43.015446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:38.104 [2024-07-22 11:02:43.056600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:38.362  Copying: 12/36 [MB] (average 413 MBps) 01:05:38.362 01:05:38.362 01:05:38.362 real 0m0.540s 01:05:38.362 user 0m0.329s 01:05:38.362 sys 0m0.296s 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:05:38.362 ************************************ 01:05:38.362 END TEST dd_sparse_file_to_bdev 01:05:38.362 ************************************ 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 01:05:38.362 ************************************ 01:05:38.362 START TEST dd_sparse_bdev_to_file 01:05:38.362 ************************************ 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 01:05:38.362 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 01:05:38.362 [2024-07-22 11:02:43.450299] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:38.363 [2024-07-22 11:02:43.450672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76753 ] 01:05:38.363 { 01:05:38.363 "subsystems": [ 01:05:38.363 { 01:05:38.363 "subsystem": "bdev", 01:05:38.363 "config": [ 01:05:38.363 { 01:05:38.363 "params": { 01:05:38.363 "block_size": 4096, 01:05:38.363 "filename": "dd_sparse_aio_disk", 01:05:38.363 "name": "dd_aio" 01:05:38.363 }, 01:05:38.363 "method": "bdev_aio_create" 01:05:38.363 }, 01:05:38.363 { 01:05:38.363 "method": "bdev_wait_for_examine" 01:05:38.363 } 01:05:38.363 ] 01:05:38.363 } 01:05:38.363 ] 01:05:38.363 } 01:05:38.622 [2024-07-22 11:02:43.591559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:38.622 [2024-07-22 11:02:43.632568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:38.622 [2024-07-22 11:02:43.673653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:38.881  Copying: 12/36 [MB] (average 750 MBps) 01:05:38.881 01:05:38.881 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 01:05:38.881 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 01:05:38.881 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 01:05:38.881 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 01:05:38.881 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 01:05:38.881 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 01:05:38.881 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 01:05:38.881 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 01:05:38.881 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 01:05:38.881 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 01:05:38.881 01:05:38.881 real 0m0.564s 01:05:38.881 user 0m0.324s 01:05:38.881 sys 0m0.314s 01:05:38.881 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:38.881 11:02:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 01:05:38.881 ************************************ 01:05:38.881 END TEST dd_sparse_bdev_to_file 01:05:38.881 ************************************ 01:05:38.881 11:02:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 01:05:38.881 11:02:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 01:05:38.881 11:02:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 01:05:38.881 11:02:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 01:05:38.881 11:02:44 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 01:05:38.881 11:02:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 01:05:38.881 01:05:38.881 real 0m2.110s 01:05:38.881 user 0m1.124s 01:05:38.881 sys 0m1.227s 01:05:38.881 11:02:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:38.881 11:02:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 01:05:38.881 ************************************ 01:05:38.881 END TEST spdk_dd_sparse 01:05:38.881 ************************************ 01:05:39.141 11:02:44 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 01:05:39.141 11:02:44 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 01:05:39.141 11:02:44 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:39.141 11:02:44 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:39.141 11:02:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:05:39.141 ************************************ 01:05:39.141 START TEST spdk_dd_negative 01:05:39.141 ************************************ 01:05:39.141 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 01:05:39.141 * Looking for test storage... 01:05:39.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:05:39.141 11:02:44 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:39.141 11:02:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:39.141 11:02:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:39.141 11:02:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:39.141 11:02:44 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:39.141 11:02:44 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:39.141 11:02:44 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:39.141 11:02:44 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:05:39.142 ************************************ 01:05:39.142 START TEST dd_invalid_arguments 01:05:39.142 ************************************ 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.142 11:02:44 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 01:05:39.142 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 01:05:39.142 01:05:39.142 CPU options: 01:05:39.142 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 01:05:39.142 (like [0,1,10]) 01:05:39.142 --lcores lcore to CPU mapping list. The list is in the format: 01:05:39.142 [<,lcores[@CPUs]>...] 01:05:39.142 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 01:05:39.142 Within the group, '-' is used for range separator, 01:05:39.142 ',' is used for single number separator. 01:05:39.142 '( )' can be omitted for single element group, 01:05:39.142 '@' can be omitted if cpus and lcores have the same value 01:05:39.142 --disable-cpumask-locks Disable CPU core lock files. 01:05:39.142 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 01:05:39.142 pollers in the app support interrupt mode) 01:05:39.142 -p, --main-core main (primary) core for DPDK 01:05:39.142 01:05:39.142 Configuration options: 01:05:39.142 -c, --config, --json JSON config file 01:05:39.142 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 01:05:39.142 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
01:05:39.142 --wait-for-rpc wait for RPCs to initialize subsystems 01:05:39.142 --rpcs-allowed comma-separated list of permitted RPCS 01:05:39.142 --json-ignore-init-errors don't exit on invalid config entry 01:05:39.142 01:05:39.142 Memory options: 01:05:39.142 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 01:05:39.142 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 01:05:39.142 --huge-dir use a specific hugetlbfs mount to reserve memory from 01:05:39.142 -R, --huge-unlink unlink huge files after initialization 01:05:39.142 -n, --mem-channels number of memory channels used for DPDK 01:05:39.142 -s, --mem-size memory size in MB for DPDK (default: 0MB) 01:05:39.142 --msg-mempool-size global message memory pool size in count (default: 262143) 01:05:39.142 --no-huge run without using hugepages 01:05:39.142 -i, --shm-id shared memory ID (optional) 01:05:39.142 -g, --single-file-segments force creating just one hugetlbfs file 01:05:39.142 01:05:39.142 PCI options: 01:05:39.142 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 01:05:39.142 -B, --pci-blocked pci addr to block (can be used more than once) 01:05:39.142 -u, --no-pci disable PCI access 01:05:39.142 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 01:05:39.142 01:05:39.142 Log options: 01:05:39.142 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 01:05:39.142 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 01:05:39.142 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 01:05:39.142 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 01:05:39.142 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 01:05:39.142 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 01:05:39.142 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 01:05:39.142 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 01:05:39.142 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 01:05:39.142 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 01:05:39.142 virtio_vfio_user, vmd) 01:05:39.142 --silence-noticelog disable notice level logging to stderr 01:05:39.142 01:05:39.142 Trace options: 01:05:39.142 --num-trace-entries number of trace entries for each core, must be power of 2, 01:05:39.142 setting 0 to disable trace (default 32768) 01:05:39.142 Tracepoints vary in size and can use more than one trace entry. 01:05:39.142 -e, --tpoint-group [:] 01:05:39.142 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 01:05:39.142 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 01:05:39.142 [2024-07-22 11:02:44.324246] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 01:05:39.142 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 01:05:39.142 tpoint_mask - tracepoint mask for enabling individual tpoints inside 01:05:39.142 a tracepoint group. First tpoint inside a group can be enabled by 01:05:39.142 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 01:05:39.142 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 01:05:39.142 in /include/spdk_internal/trace_defs.h 01:05:39.142 01:05:39.142 Other options: 01:05:39.142 -h, --help show this usage 01:05:39.142 -v, --version print SPDK version 01:05:39.142 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 01:05:39.142 --env-context Opaque context for use of the env implementation 01:05:39.142 01:05:39.142 Application specific: 01:05:39.142 [--------- DD Options ---------] 01:05:39.142 --if Input file. Must specify either --if or --ib. 01:05:39.142 --ib Input bdev. Must specifier either --if or --ib 01:05:39.142 --of Output file. Must specify either --of or --ob. 01:05:39.142 --ob Output bdev. Must specify either --of or --ob. 01:05:39.142 --iflag Input file flags. 01:05:39.142 --oflag Output file flags. 01:05:39.142 --bs I/O unit size (default: 4096) 01:05:39.142 --qd Queue depth (default: 2) 01:05:39.142 --count I/O unit count. The number of I/O units to copy. (default: all) 01:05:39.142 --skip Skip this many I/O units at start of input. (default: 0) 01:05:39.142 --seek Skip this many I/O units at start of output. (default: 0) 01:05:39.142 --aio Force usage of AIO. (by default io_uring is used if available) 01:05:39.142 --sparse Enable hole skipping in input target 01:05:39.142 Available iflag and oflag values: 01:05:39.142 append - append mode 01:05:39.142 direct - use direct I/O for data 01:05:39.142 directory - fail unless a directory 01:05:39.142 dsync - use synchronized I/O for data 01:05:39.142 noatime - do not update access time 01:05:39.142 noctty - do not assign controlling terminal from file 01:05:39.142 nofollow - do not follow symlinks 01:05:39.142 nonblock - use non-blocking I/O 01:05:39.142 sync - use synchronized I/O for data and metadata 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:39.142 01:05:39.142 real 0m0.066s 01:05:39.142 user 0m0.038s 01:05:39.142 sys 0m0.027s 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:39.142 11:02:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 01:05:39.142 ************************************ 01:05:39.142 END TEST dd_invalid_arguments 01:05:39.142 ************************************ 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:05:39.402 ************************************ 01:05:39.402 START TEST dd_double_input 01:05:39.402 ************************************ 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 01:05:39.402 [2024-07-22 11:02:44.475434] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:39.402 01:05:39.402 real 0m0.073s 01:05:39.402 user 0m0.035s 01:05:39.402 sys 0m0.038s 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 01:05:39.402 ************************************ 01:05:39.402 END TEST dd_double_input 01:05:39.402 ************************************ 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:05:39.402 ************************************ 01:05:39.402 START TEST dd_double_output 01:05:39.402 ************************************ 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:39.402 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 01:05:39.661 [2024-07-22 11:02:44.622479] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:39.661 01:05:39.661 real 0m0.071s 01:05:39.661 user 0m0.043s 01:05:39.661 sys 0m0.028s 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 01:05:39.661 ************************************ 01:05:39.661 END TEST dd_double_output 01:05:39.661 ************************************ 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:05:39.661 ************************************ 01:05:39.661 START TEST dd_no_input 01:05:39.661 ************************************ 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.661 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:39.662 11:02:44 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 01:05:39.662 [2024-07-22 11:02:44.759297] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:39.662 01:05:39.662 real 0m0.069s 01:05:39.662 user 0m0.033s 01:05:39.662 sys 0m0.036s 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 01:05:39.662 ************************************ 01:05:39.662 END TEST dd_no_input 01:05:39.662 ************************************ 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:05:39.662 ************************************ 01:05:39.662 START TEST dd_no_output 01:05:39.662 ************************************ 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.662 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:39.662 11:02:44 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:05:39.921 [2024-07-22 11:02:44.902396] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:39.921 01:05:39.921 real 0m0.071s 01:05:39.921 user 0m0.034s 01:05:39.921 sys 0m0.037s 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 01:05:39.921 ************************************ 01:05:39.921 END TEST dd_no_output 01:05:39.921 ************************************ 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:05:39.921 ************************************ 01:05:39.921 START TEST dd_wrong_blocksize 01:05:39.921 ************************************ 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.921 11:02:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.922 11:02:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.922 11:02:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.922 11:02:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:39.922 11:02:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:39.922 11:02:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:39.922 11:02:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 01:05:39.922 [2024-07-22 11:02:45.044947] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 01:05:39.922 11:02:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 01:05:39.922 11:02:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:39.922 11:02:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:05:39.922 11:02:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:39.922 01:05:39.922 real 0m0.070s 01:05:39.922 user 0m0.041s 01:05:39.922 sys 0m0.029s 01:05:39.922 11:02:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:39.922 11:02:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 01:05:39.922 ************************************ 01:05:39.922 END TEST dd_wrong_blocksize 01:05:39.922 ************************************ 01:05:39.922 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 01:05:39.922 11:02:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 01:05:39.922 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:39.922 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:39.922 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:05:40.181 ************************************ 01:05:40.181 START TEST dd_smaller_blocksize 01:05:40.181 ************************************ 01:05:40.181 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 01:05:40.181 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 01:05:40.181 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 01:05:40.181 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 01:05:40.182 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.182 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:40.182 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.182 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 01:05:40.182 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.182 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:40.182 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.182 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:40.182 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 01:05:40.182 [2024-07-22 11:02:45.187412] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:40.182 [2024-07-22 11:02:45.187475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76976 ] 01:05:40.182 [2024-07-22 11:02:45.329246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:40.182 [2024-07-22 11:02:45.370685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:40.441 [2024-07-22 11:02:45.411217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:40.441 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 01:05:40.441 [2024-07-22 11:02:45.431644] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 01:05:40.441 [2024-07-22 11:02:45.431668] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:05:40.441 [2024-07-22 11:02:45.519584] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:05:40.441 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 01:05:40.441 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:40.441 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 01:05:40.441 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 01:05:40.441 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 01:05:40.441 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:40.441 01:05:40.441 real 0m0.468s 01:05:40.441 user 0m0.239s 01:05:40.441 sys 0m0.125s 01:05:40.441 ************************************ 01:05:40.441 END TEST dd_smaller_blocksize 01:05:40.441 ************************************ 01:05:40.441 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:40.441 11:02:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:05:40.700 ************************************ 01:05:40.700 START TEST dd_invalid_count 01:05:40.700 ************************************ 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 01:05:40.700 [2024-07-22 11:02:45.716685] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:40.700 01:05:40.700 real 0m0.068s 01:05:40.700 user 0m0.034s 01:05:40.700 sys 0m0.032s 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 01:05:40.700 ************************************ 01:05:40.700 END TEST dd_invalid_count 
01:05:40.700 ************************************ 01:05:40.700 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:05:40.701 ************************************ 01:05:40.701 START TEST dd_invalid_oflag 01:05:40.701 ************************************ 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 01:05:40.701 [2024-07-22 11:02:45.857642] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:40.701 01:05:40.701 real 0m0.071s 01:05:40.701 user 0m0.038s 01:05:40.701 sys 0m0.031s 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:40.701 ************************************ 01:05:40.701 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set 
+x 01:05:40.701 END TEST dd_invalid_oflag 01:05:40.701 ************************************ 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:05:40.960 ************************************ 01:05:40.960 START TEST dd_invalid_iflag 01:05:40.960 ************************************ 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:40.960 11:02:45 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 01:05:40.960 [2024-07-22 11:02:45.997856] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 01:05:40.960 ************************************ 01:05:40.960 END TEST dd_invalid_iflag 01:05:40.960 ************************************ 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:40.960 01:05:40.960 real 0m0.073s 01:05:40.960 user 0m0.041s 01:05:40.960 sys 0m0.030s 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- 
common/autotest_common.sh@1124 -- # xtrace_disable 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:05:40.960 ************************************ 01:05:40.960 START TEST dd_unknown_flag 01:05:40.960 ************************************ 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:40.960 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 01:05:40.960 [2024-07-22 11:02:46.142764] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:40.960 [2024-07-22 11:02:46.142827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77064 ] 01:05:41.219 [2024-07-22 11:02:46.283479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:41.219 [2024-07-22 11:02:46.324346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:41.219 [2024-07-22 11:02:46.364791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:41.219 [2024-07-22 11:02:46.384919] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 01:05:41.219 [2024-07-22 11:02:46.384962] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:05:41.219 [2024-07-22 11:02:46.385008] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 01:05:41.219 [2024-07-22 11:02:46.385018] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:05:41.219 [2024-07-22 11:02:46.385210] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 01:05:41.219 [2024-07-22 11:02:46.385223] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:05:41.219 [2024-07-22 11:02:46.385268] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 01:05:41.219 [2024-07-22 11:02:46.385275] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 01:05:41.476 [2024-07-22 11:02:46.473011] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:05:41.476 ************************************ 01:05:41.476 END TEST dd_unknown_flag 01:05:41.476 ************************************ 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:41.476 01:05:41.476 real 0m0.469s 01:05:41.476 user 0m0.230s 01:05:41.476 sys 0m0.142s 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:05:41.476 ************************************ 01:05:41.476 START TEST dd_invalid_json 01:05:41.476 ************************************ 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 01:05:41.476 11:02:46 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:05:41.476 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 01:05:41.734 [2024-07-22 11:02:46.688620] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:05:41.734 [2024-07-22 11:02:46.688685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77098 ] 01:05:41.734 [2024-07-22 11:02:46.829896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:41.734 [2024-07-22 11:02:46.871262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:41.734 [2024-07-22 11:02:46.871326] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 01:05:41.734 [2024-07-22 11:02:46.871339] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:05:41.734 [2024-07-22 11:02:46.871348] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:05:41.734 [2024-07-22 11:02:46.871378] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:05:41.993 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 01:05:41.993 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:05:41.993 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 01:05:41.993 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 01:05:41.993 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 01:05:41.993 ************************************ 01:05:41.993 END TEST dd_invalid_json 01:05:41.993 ************************************ 01:05:41.993 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:05:41.993 01:05:41.993 real 0m0.318s 01:05:41.993 user 0m0.147s 01:05:41.993 sys 0m0.070s 01:05:41.993 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:41.993 11:02:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 01:05:41.993 11:02:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 01:05:41.993 01:05:41.993 real 0m2.888s 01:05:41.993 user 0m1.298s 01:05:41.993 sys 0m1.261s 01:05:41.993 11:02:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:41.993 11:02:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:05:41.993 ************************************ 01:05:41.993 END TEST spdk_dd_negative 01:05:41.993 ************************************ 01:05:41.993 11:02:47 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 01:05:41.993 ************************************ 01:05:41.993 END TEST spdk_dd 01:05:41.993 ************************************ 01:05:41.993 01:05:41.993 real 1m5.396s 01:05:41.993 user 0m39.284s 01:05:41.994 sys 0m30.217s 01:05:41.994 11:02:47 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:41.994 11:02:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:05:41.994 11:02:47 -- common/autotest_common.sh@1142 -- # return 0 01:05:41.994 11:02:47 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 01:05:41.994 11:02:47 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 01:05:41.994 11:02:47 -- spdk/autotest.sh@260 -- # timing_exit lib 01:05:41.994 11:02:47 -- common/autotest_common.sh@728 -- # xtrace_disable 01:05:41.994 11:02:47 -- common/autotest_common.sh@10 -- # set +x 01:05:41.994 11:02:47 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 
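Every negative case above (dd_wrong_blocksize, dd_smaller_blocksize, dd_invalid_count, dd_invalid_oflag, dd_invalid_iflag, dd_unknown_flag, dd_invalid_json) drives the same pattern from common/autotest_common.sh: the NOT wrapper runs spdk_dd with an argument the tool must reject, records the exit status in es, folds statuses above 128 back into range, and passes only if the command failed. The sketch below is an approximate reconstruction pieced together from the xtrace lines, not the real helper, and the dump-file paths are abbreviated.

    # Approximate shape of the NOT helper as reconstructed from the xtrace above;
    # the actual implementation in common/autotest_common.sh differs in detail.
    NOT() {
        local es=0
        "$@" || es=$?                          # run the command that must fail
        (( es > 128 )) && es=$(( es - 128 ))   # e.g. the observed 244 becomes 116
        case "$es" in
            106 | 116) es=1 ;;                 # collapse known SPDK failure codes
        esac
        (( !es == 0 ))                         # assertion: succeed only on failure
    }

    # Example mirroring dd_wrong_blocksize: --bs=0 must be rejected by spdk_dd.
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=dd.dump0 --of=dd.dump1 --bs=0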
01:05:41.994 11:02:47 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 01:05:41.994 11:02:47 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 01:05:41.994 11:02:47 -- spdk/autotest.sh@280 -- # export NET_TYPE 01:05:41.994 11:02:47 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 01:05:41.994 11:02:47 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 01:05:41.994 11:02:47 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 01:05:41.994 11:02:47 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:05:41.994 11:02:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:41.994 11:02:47 -- common/autotest_common.sh@10 -- # set +x 01:05:41.994 ************************************ 01:05:41.994 START TEST nvmf_tcp 01:05:41.994 ************************************ 01:05:41.994 11:02:47 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 01:05:42.267 * Looking for test storage... 01:05:42.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:42.267 11:02:47 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:42.267 11:02:47 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:42.267 11:02:47 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:42.267 11:02:47 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:42.267 11:02:47 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:42.267 11:02:47 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:42.267 11:02:47 nvmf_tcp -- paths/export.sh@5 -- # export PATH 01:05:42.267 11:02:47 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 01:05:42.267 11:02:47 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 01:05:42.267 11:02:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 01:05:42.267 11:02:47 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 01:05:42.267 11:02:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:05:42.267 11:02:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:42.267 11:02:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:05:42.267 ************************************ 01:05:42.267 START TEST nvmf_host_management 01:05:42.267 ************************************ 01:05:42.267 
11:02:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 01:05:42.527 * Looking for test storage... 01:05:42.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:05:42.527 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:05:42.528 Cannot find device "nvmf_init_br" 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:05:42.528 Cannot find device "nvmf_tgt_br" 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:05:42.528 Cannot find device "nvmf_tgt_br2" 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:05:42.528 Cannot find device "nvmf_init_br" 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:05:42.528 Cannot find device "nvmf_tgt_br" 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 01:05:42.528 11:02:47 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:05:42.528 Cannot find device "nvmf_tgt_br2" 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:05:42.528 Cannot find device "nvmf_br" 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:05:42.528 Cannot find device "nvmf_init_if" 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:05:42.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:05:42.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:05:42.528 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
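Stripped of xtrace noise, the nvmf_veth_init sequence above amounts to the following topology setup (same interface names and 10.0.0.x addresses as in the log); the log continues below with enslaving the *_br ends to the nvmf_br bridge, an iptables ACCEPT rule for TCP port 4420, and ping checks against 10.0.0.2, 10.0.0.3 and 10.0.0.1.

    # Condensed from the ip(8) commands in the log: one network namespace for the
    # target, veth pairs for initiator and target sides, and a bridge joining them.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up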
01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:05:42.787 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:05:43.046 11:02:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:05:43.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:05:43.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 01:05:43.046 01:05:43.046 --- 10.0.0.2 ping statistics --- 01:05:43.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:43.046 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:05:43.046 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:05:43.046 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 01:05:43.046 01:05:43.046 --- 10.0.0.3 ping statistics --- 01:05:43.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:43.046 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:05:43.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:05:43.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 01:05:43.046 01:05:43.046 --- 10.0.0.1 ping statistics --- 01:05:43.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:43.046 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=77355 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 77355 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 77355 ']' 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:05:43.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 01:05:43.046 11:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:43.046 [2024-07-22 11:02:48.140089] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:43.047 [2024-07-22 11:02:48.140154] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:05:43.305 [2024-07-22 11:02:48.285062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:05:43.305 [2024-07-22 11:02:48.328664] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:05:43.305 [2024-07-22 11:02:48.328715] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:05:43.305 [2024-07-22 11:02:48.328725] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:05:43.305 [2024-07-22 11:02:48.328733] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:05:43.305 [2024-07-22 11:02:48.328740] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
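nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E) and then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. The helper itself is not shown in the log; the function below is only a hypothetical stand-in illustrating the idea of polling the RPC socket while checking that the pid is still alive.

    # Hypothetical stand-in for waitforlisten (the real helper lives in
    # common/autotest_common.sh): poll the UNIX-domain RPC socket until it answers.
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # target exited prematurely
            if [[ -S $rpc_addr ]] &&
                scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                               # RPC server is up and responding
            fi
            sleep 0.1
        done
        return 1
    }
    # Usage mirroring the log: wait_for_rpc "$nvmfpid" /var/tmp/spdk.sock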
01:05:43.305 [2024-07-22 11:02:48.328938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:05:43.305 [2024-07-22 11:02:48.329794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:05:43.305 [2024-07-22 11:02:48.330130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 01:05:43.305 [2024-07-22 11:02:48.330132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:05:43.305 [2024-07-22 11:02:48.370761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:43.873 11:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:05:43.873 11:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 01:05:43.873 11:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:05:43.873 11:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 01:05:43.873 11:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:43.873 11:02:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:05:43.873 11:02:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:05:43.873 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:43.873 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:43.873 [2024-07-22 11:02:49.027991] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:05:43.873 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:43.873 11:02:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 01:05:43.873 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 01:05:43.873 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:43.873 11:02:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 01:05:43.873 11:02:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 01:05:43.873 11:02:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 01:05:43.873 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:43.873 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:44.131 Malloc0 01:05:44.131 [2024-07-22 11:02:49.102992] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=77409 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 77409 /var/tmp/bdevperf.sock 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 77409 ']' 
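The cat | rpc_cmd step at host_management.sh@23-30 above feeds a small RPC batch (rpcs.txt) to the target; its contents are not echoed in the log, but given MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512, the Malloc0 bdev, the 10.0.0.2:4420 listener, and the host removal attempted later, it plausibly corresponds to something like the batch below. This is a guess at the script's intent, not a transcript of the real rpcs.txt.

    # Plausible reconstruction of the RPC batch piped into rpc_cmd above;
    # the real rpcs.txt in test/nvmf/target/host_management.sh may differ.
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420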
01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 01:05:44.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:05:44.131 { 01:05:44.131 "params": { 01:05:44.131 "name": "Nvme$subsystem", 01:05:44.131 "trtype": "$TEST_TRANSPORT", 01:05:44.131 "traddr": "$NVMF_FIRST_TARGET_IP", 01:05:44.131 "adrfam": "ipv4", 01:05:44.131 "trsvcid": "$NVMF_PORT", 01:05:44.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:05:44.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:05:44.131 "hdgst": ${hdgst:-false}, 01:05:44.131 "ddgst": ${ddgst:-false} 01:05:44.131 }, 01:05:44.131 "method": "bdev_nvme_attach_controller" 01:05:44.131 } 01:05:44.131 EOF 01:05:44.131 )") 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 01:05:44.131 11:02:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:05:44.131 "params": { 01:05:44.131 "name": "Nvme0", 01:05:44.131 "trtype": "tcp", 01:05:44.131 "traddr": "10.0.0.2", 01:05:44.131 "adrfam": "ipv4", 01:05:44.131 "trsvcid": "4420", 01:05:44.131 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:05:44.131 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:05:44.131 "hdgst": false, 01:05:44.131 "ddgst": false 01:05:44.131 }, 01:05:44.131 "method": "bdev_nvme_attach_controller" 01:05:44.131 }' 01:05:44.131 [2024-07-22 11:02:49.219588] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
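On the initiator side, bdevperf consumes that generated JSON over a process-substitution file descriptor (--json /dev/fd/63); the rendered fragment above is its bdev_nvme_attach_controller entry. The same attachment can be sketched interactively against bdevperf's RPC socket; the option letters below are standard rpc.py flags, but treat the invocation as illustrative rather than the harness's actual path.

    # Sketch: attach the controller by hand instead of via --json, using the
    # values rendered in the JSON above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
    # bdevperf then drives Nvme0n1 with the parameters from its command line
    # (-q 64 outstanding I/Os of 64 KiB each, verify workload, 10 seconds).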
01:05:44.131 [2024-07-22 11:02:49.219652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77409 ] 01:05:44.388 [2024-07-22 11:02:49.361024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:44.388 [2024-07-22 11:02:49.402814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:44.388 [2024-07-22 11:02:49.452394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:44.388 Running I/O for 10 seconds... 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1155 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1155 -ge 100 ']' 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:44.957 11:02:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:44.957 [2024-07-22 11:02:50.134230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.957 [2024-07-22 11:02:50.134838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.957 [2024-07-22 11:02:50.134855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.134866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.134874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.134884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.134893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.134917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.134925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.134936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.134944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.134954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.134962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.134972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.134980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.134990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.134999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 
[2024-07-22 11:02:50.135037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 
11:02:50.135229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135415] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:44.958 [2024-07-22 11:02:50.135489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.135498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19778d0 is same with the state(5) to be set 01:05:44.958 [2024-07-22 11:02:50.135554] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19778d0 was disconnected and freed. reset controller. 
01:05:44.958 [2024-07-22 11:02:50.136476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:05:44.958 task offset: 29440 on job bdev=Nvme0n1 fails 01:05:44.958 01:05:44.958 Latency(us) 01:05:44.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:05:44.958 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 01:05:44.958 Job: Nvme0n1 ended in about 0.58 seconds with error 01:05:44.958 Verification LBA range: start 0x0 length 0x400 01:05:44.958 Nvme0n1 : 0.58 2090.48 130.65 110.03 0.00 28469.83 1776.58 27161.91 01:05:44.958 =================================================================================================================== 01:05:44.958 Total : 2090.48 130.65 110.03 0.00 28469.83 1776.58 27161.91 01:05:44.958 [2024-07-22 11:02:50.138273] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:05:44.958 [2024-07-22 11:02:50.138358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d70c0 (9): Bad file descriptor 01:05:44.958 11:02:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:44.958 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 01:05:44.958 11:02:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 01:05:44.958 11:02:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:44.958 [2024-07-22 11:02:50.141675] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 01:05:44.958 [2024-07-22 11:02:50.141897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:05:44.958 [2024-07-22 11:02:50.142022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:44.958 [2024-07-22 11:02:50.142132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 01:05:44.958 [2024-07-22 11:02:50.142217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 01:05:44.958 [2024-07-22 11:02:50.142228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:05:44.958 [2024-07-22 11:02:50.142237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14d70c0 01:05:44.958 [2024-07-22 11:02:50.142273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d70c0 (9): Bad file descriptor 01:05:44.958 [2024-07-22 11:02:50.142287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:05:44.958 [2024-07-22 11:02:50.142296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:05:44.958 [2024-07-22 11:02:50.142306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:05:44.958 [2024-07-22 11:02:50.142321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
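The long run of ABORTED - SQ DELETION completions above is the direct effect of revoking host access while 64 I/Os were in flight: nvmf_subsystem_remove_host makes the target tear down host0's queue pair, every queued command completes as aborted, and bdevperf's reconnect attempts are then rejected ("does not allow host", connect status sct 1 / sc 132) until the host is added back. The two RPCs driving that sequence, both issued by the test above, are sketched here.

    # Revoke, then restore, host0's access to the subsystem.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # ...the target drops the qpair, in-flight I/O completes as ABORTED - SQ
    #    DELETION, and new connect attempts fail until access is restored...
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0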
01:05:44.958 11:02:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:05:44.958 11:02:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 01:05:46.418 11:02:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 77409 01:05:46.418 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (77409) - No such process 01:05:46.418 11:02:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 01:05:46.418 11:02:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 01:05:46.418 11:02:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 01:05:46.418 11:02:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 01:05:46.418 11:02:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 01:05:46.418 11:02:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 01:05:46.418 11:02:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:05:46.418 11:02:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:05:46.418 { 01:05:46.418 "params": { 01:05:46.418 "name": "Nvme$subsystem", 01:05:46.418 "trtype": "$TEST_TRANSPORT", 01:05:46.418 "traddr": "$NVMF_FIRST_TARGET_IP", 01:05:46.418 "adrfam": "ipv4", 01:05:46.418 "trsvcid": "$NVMF_PORT", 01:05:46.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:05:46.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:05:46.418 "hdgst": ${hdgst:-false}, 01:05:46.418 "ddgst": ${ddgst:-false} 01:05:46.418 }, 01:05:46.418 "method": "bdev_nvme_attach_controller" 01:05:46.418 } 01:05:46.418 EOF 01:05:46.418 )") 01:05:46.418 11:02:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 01:05:46.418 11:02:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 01:05:46.418 11:02:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 01:05:46.418 11:02:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:05:46.418 "params": { 01:05:46.418 "name": "Nvme0", 01:05:46.418 "trtype": "tcp", 01:05:46.418 "traddr": "10.0.0.2", 01:05:46.418 "adrfam": "ipv4", 01:05:46.418 "trsvcid": "4420", 01:05:46.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:05:46.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:05:46.418 "hdgst": false, 01:05:46.418 "ddgst": false 01:05:46.418 }, 01:05:46.418 "method": "bdev_nvme_attach_controller" 01:05:46.418 }' 01:05:46.418 [2024-07-22 11:02:51.212460] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
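The failed bdevperf exited on its own when the controller reset failed, so the kill -9 above reports "No such process" and is swallowed by true; the test then clears the stale per-core lock files and reruns a one-second verify pass to confirm that I/O flows again now that host0 is allowed back in. A sketch of that tolerate-then-reverify pattern follows; $perfpid and $bdevperf_json are illustrative stand-ins (the real run feeds the config through /dev/fd/62).

    # Ignore an already-dead perf process, clear stale core locks, then do a
    # short verify pass against the same generated JSON config.
    kill -9 "$perfpid" 2>/dev/null || true
    rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 \
          /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json "$bdevperf_json" -q 64 -o 65536 -w verify -t 1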
01:05:46.418 [2024-07-22 11:02:51.212526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77448 ] 01:05:46.418 [2024-07-22 11:02:51.355459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:46.418 [2024-07-22 11:02:51.396268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:46.418 [2024-07-22 11:02:51.445474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:46.418 Running I/O for 1 seconds... 01:05:47.794 01:05:47.794 Latency(us) 01:05:47.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:05:47.794 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 01:05:47.794 Verification LBA range: start 0x0 length 0x400 01:05:47.794 Nvme0n1 : 1.02 2185.91 136.62 0.00 0.00 28817.70 3066.24 27161.91 01:05:47.794 =================================================================================================================== 01:05:47.794 Total : 2185.91 136.62 0.00 0.00 28817.70 3066.24 27161.91 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:05:47.794 rmmod nvme_tcp 01:05:47.794 rmmod nvme_fabrics 01:05:47.794 rmmod nvme_keyring 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 77355 ']' 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 77355 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 77355 ']' 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 77355 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77355 01:05:47.794 
killing process with pid 77355 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77355' 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 77355 01:05:47.794 11:02:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 77355 01:05:48.054 [2024-07-22 11:02:53.079592] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 01:05:48.054 11:02:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:05:48.054 11:02:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:05:48.054 11:02:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:05:48.054 11:02:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:05:48.054 11:02:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 01:05:48.054 11:02:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:48.054 11:02:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:05:48.054 11:02:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:48.054 11:02:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:05:48.054 11:02:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 01:05:48.054 01:05:48.054 real 0m5.775s 01:05:48.054 user 0m21.225s 01:05:48.054 sys 0m1.704s 01:05:48.054 11:02:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 01:05:48.054 ************************************ 01:05:48.054 END TEST nvmf_host_management 01:05:48.054 ************************************ 01:05:48.054 11:02:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:05:48.054 11:02:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:05:48.054 11:02:53 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 01:05:48.054 11:02:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:05:48.054 11:02:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:05:48.054 11:02:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:05:48.054 ************************************ 01:05:48.054 START TEST nvmf_lvol 01:05:48.054 ************************************ 01:05:48.054 11:02:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 01:05:48.313 * Looking for test storage... 
01:05:48.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 01:05:48.313 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:05:48.314 11:02:53 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:05:48.314 Cannot find device "nvmf_tgt_br" 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:05:48.314 Cannot find device "nvmf_tgt_br2" 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:05:48.314 Cannot find device "nvmf_tgt_br" 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 01:05:48.314 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:05:48.573 Cannot find device "nvmf_tgt_br2" 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:05:48.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:05:48.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:05:48.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:05:48.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 01:05:48.573 01:05:48.573 --- 10.0.0.2 ping statistics --- 01:05:48.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:48.573 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 01:05:48.573 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:05:48.831 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:05:48.831 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 01:05:48.831 01:05:48.831 --- 10.0.0.3 ping statistics --- 01:05:48.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:48.831 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 01:05:48.831 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:05:48.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:05:48.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 01:05:48.831 01:05:48.831 --- 10.0.0.1 ping statistics --- 01:05:48.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:48.831 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 01:05:48.831 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:05:48.831 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 01:05:48.831 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:05:48.831 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:05:48.831 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:05:48.831 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:05:48.831 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:05:48.831 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:05:48.831 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:05:48.832 11:02:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 01:05:48.832 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:05:48.832 11:02:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 01:05:48.832 11:02:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:05:48.832 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=77670 01:05:48.832 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 77670 01:05:48.832 11:02:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 01:05:48.832 11:02:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 77670 ']' 01:05:48.832 11:02:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:05:48.832 11:02:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 01:05:48.832 11:02:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:05:48.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:05:48.832 11:02:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 01:05:48.832 11:02:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:05:48.832 [2024-07-22 11:02:53.879352] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:05:48.832 [2024-07-22 11:02:53.879410] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:05:48.832 [2024-07-22 11:02:54.023146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:05:49.090 [2024-07-22 11:02:54.064643] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:05:49.090 [2024-07-22 11:02:54.064696] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:05:49.090 [2024-07-22 11:02:54.064705] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:05:49.090 [2024-07-22 11:02:54.064713] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:05:49.090 [2024-07-22 11:02:54.064720] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:05:49.090 [2024-07-22 11:02:54.064927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:05:49.090 [2024-07-22 11:02:54.065135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:05:49.090 [2024-07-22 11:02:54.065137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:05:49.090 [2024-07-22 11:02:54.105925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:05:49.657 11:02:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:05:49.657 11:02:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 01:05:49.657 11:02:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:05:49.657 11:02:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 01:05:49.657 11:02:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:05:49.657 11:02:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:05:49.657 11:02:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:05:49.915 [2024-07-22 11:02:54.959021] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:05:49.915 11:02:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:05:50.175 11:02:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 01:05:50.175 11:02:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:05:50.432 11:02:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 01:05:50.432 11:02:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 01:05:50.432 11:02:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 01:05:50.693 11:02:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ef3dc74f-e928-4ab4-adbf-0b9b77fbfc14 01:05:50.693 11:02:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef3dc74f-e928-4ab4-adbf-0b9b77fbfc14 lvol 20 01:05:50.952 11:02:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=85faaba0-2fa1-48ba-ab68-e3d62c9627a5 01:05:50.952 11:02:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:05:50.952 11:02:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 85faaba0-2fa1-48ba-ab68-e3d62c9627a5 01:05:51.210 11:02:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:05:51.468 [2024-07-22 11:02:56.492389] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:05:51.468 11:02:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:05:51.727 11:02:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 01:05:51.727 11:02:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=77729 01:05:51.727 11:02:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 01:05:52.661 11:02:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 85faaba0-2fa1-48ba-ab68-e3d62c9627a5 MY_SNAPSHOT 01:05:52.919 11:02:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d0083220-9a12-4f2f-84a5-f6404be5d9c7 01:05:52.919 11:02:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 85faaba0-2fa1-48ba-ab68-e3d62c9627a5 30 01:05:53.177 11:02:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d0083220-9a12-4f2f-84a5-f6404be5d9c7 MY_CLONE 01:05:53.177 11:02:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=70c42860-b76d-4496-949a-3a99bbafbd32 01:05:53.177 11:02:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 70c42860-b76d-4496-949a-3a99bbafbd32 01:05:53.742 11:02:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 77729 01:06:01.861 Initializing NVMe Controllers 01:06:01.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 01:06:01.861 Controller IO queue size 128, less than required. 01:06:01.861 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:06:01.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 01:06:01.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 01:06:01.861 Initialization complete. Launching workers. 
01:06:01.861 ========================================================
01:06:01.861 Latency(us)
01:06:01.861 Device Information : IOPS MiB/s Average min max
01:06:01.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12129.40 47.38 10557.61 2306.42 50692.13
01:06:01.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12059.30 47.11 10620.25 3828.54 49695.66
01:06:01.861 ========================================================
01:06:01.861 Total : 24188.70 94.49 10588.84 2306.42 50692.13
01:06:01.861
01:06:01.861 11:03:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:06:02.120 11:03:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 85faaba0-2fa1-48ba-ab68-e3d62c9627a5 01:06:02.379 11:03:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef3dc74f-e928-4ab4-adbf-0b9b77fbfc14 01:06:02.379 11:03:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 01:06:02.379 11:03:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 01:06:02.379 11:03:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 01:06:02.379 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 01:06:02.379 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 01:06:02.379 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:06:02.379 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 01:06:02.379 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 01:06:02.379 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:06:02.638 rmmod nvme_tcp 01:06:02.638 rmmod nvme_fabrics 01:06:02.638 rmmod nvme_keyring 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 77670 ']' 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 77670 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 77670 ']' 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 77670 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77670 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:06:02.638 killing process with pid 77670 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77670' 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 77670 01:06:02.638 11:03:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 77670 01:06:02.897 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:06:02.897 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
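Before the remaining teardown lines, it helps to see the RPC flow that the nvmf_lvol run above actually drove, stripped of the xtrace plumbing. The following is a condensed sketch, not the test script itself: rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and $LVS, $LVOL, $SNAP and $CLONE are illustrative placeholders for the UUIDs that the trace captures.

rpc.py bdev_malloc_create 64 512                                   # -> Malloc0
rpc.py bdev_malloc_create 64 512                                   # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
LVS=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                   # lvstore on top of the raid
LVOL=$(rpc.py bdev_lvol_create -u "$LVS" lvol 20)                  # logical volume, size 20 as in the trace
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &             # background I/O load against the lvol
SNAP=$(rpc.py bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT)              # snapshot while I/O is running
rpc.py bdev_lvol_resize "$LVOL" 30                                 # resize the live lvol
CLONE=$(rpc.py bdev_lvol_clone "$SNAP" MY_CLONE)                   # clone the snapshot
rpc.py bdev_lvol_inflate "$CLONE"                                  # detach the clone from its snapshot
wait                                                               # let spdk_nvme_perf finish
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0            # teardown, as in the surrounding lines
rpc.py bdev_lvol_delete "$LVOL"
rpc.py bdev_lvol_delete_lvstore -u "$LVS"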
01:06:02.897 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:06:02.897 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:06:02.897 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 01:06:02.897 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:02.897 11:03:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:02.897 11:03:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:02.897 11:03:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:06:02.897 ************************************ 01:06:02.897 END TEST nvmf_lvol 01:06:02.897 01:06:02.897 real 0m14.710s 01:06:02.897 user 0m59.980s 01:06:02.897 sys 0m5.277s 01:06:02.897 11:03:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 01:06:02.897 11:03:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:06:02.897 ************************************ 01:06:02.897 11:03:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:06:02.897 11:03:08 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 01:06:02.897 11:03:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:06:02.897 11:03:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:06:02.897 11:03:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:06:02.897 ************************************ 01:06:02.897 START TEST nvmf_lvs_grow 01:06:02.897 ************************************ 01:06:02.897 11:03:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 01:06:03.157 * Looking for test storage... 
01:06:03.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:06:03.157 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:06:03.158 Cannot find device "nvmf_tgt_br" 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:06:03.158 Cannot find device "nvmf_tgt_br2" 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:06:03.158 Cannot find device "nvmf_tgt_br" 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:06:03.158 Cannot find device "nvmf_tgt_br2" 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:03.158 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:03.158 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 01:06:03.158 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:06:03.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:06:03.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 01:06:03.417 01:06:03.417 --- 10.0.0.2 ping statistics --- 01:06:03.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:03.417 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:06:03.417 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:06:03.417 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 01:06:03.417 01:06:03.417 --- 10.0.0.3 ping statistics --- 01:06:03.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:03.417 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:06:03.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:06:03.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 01:06:03.417 01:06:03.417 --- 10.0.0.1 ping statistics --- 01:06:03.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:03.417 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 01:06:03.417 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=78055 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 78055 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 78055 ']' 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:03.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
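The ping checks above complete the nvmf_veth_init sequence that every one of these target tests replays: two target-side veth interfaces are moved into a private network namespace, the initiator side stays in the root namespace, and a bridge ties the peer ends together. A minimal sketch of the same steps, using the interface names and 10.0.0.0/24 addresses from the trace (run as root):

ip netns add nvmf_tgt_ns_spdk                                  # namespace that will host nvmf_tgt
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator leg
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # first target leg
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target leg
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the three bridge-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # sanity-check both target addresses

Because the target is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ...), the host-side initiator at 10.0.0.1 only reaches it through this veth/bridge path on port 4420.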
01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:03.418 11:03:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:06:03.677 [2024-07-22 11:03:08.634794] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:06:03.677 [2024-07-22 11:03:08.634886] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:06:03.677 [2024-07-22 11:03:08.777956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:03.677 [2024-07-22 11:03:08.818527] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:03.677 [2024-07-22 11:03:08.818581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:06:03.677 [2024-07-22 11:03:08.818590] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:03.677 [2024-07-22 11:03:08.818598] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:03.677 [2024-07-22 11:03:08.818620] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:06:03.677 [2024-07-22 11:03:08.818651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:06:03.677 [2024-07-22 11:03:08.859313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:06:04.610 [2024-07-22 11:03:09.686756] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:06:04.610 ************************************ 01:06:04.610 START TEST lvs_grow_clean 01:06:04.610 ************************************ 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:06:04.610 11:03:09 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:06:04.610 11:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:06:04.867 11:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:06:04.867 11:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:06:05.124 11:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b8021819-0325-4000-b294-a7002f80f1d2 01:06:05.124 11:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8021819-0325-4000-b294-a7002f80f1d2 01:06:05.124 11:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:06:05.382 11:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:06:05.382 11:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:06:05.382 11:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b8021819-0325-4000-b294-a7002f80f1d2 lvol 150 01:06:05.382 11:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=08c898bc-6626-45ab-9a93-c9a9cf818c8b 01:06:05.382 11:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:06:05.382 11:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:06:05.640 [2024-07-22 11:03:10.701258] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:06:05.640 [2024-07-22 11:03:10.701321] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:06:05.640 true 01:06:05.640 11:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8021819-0325-4000-b294-a7002f80f1d2 01:06:05.640 11:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:06:05.930 11:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:06:05.930 11:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:06:05.930 11:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 08c898bc-6626-45ab-9a93-c9a9cf818c8b 01:06:06.207 11:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:06:06.464 [2024-07-22 11:03:11.444477] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:06:06.464 11:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:06:06.464 11:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=78132 01:06:06.464 11:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:06:06.464 11:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 78132 /var/tmp/bdevperf.sock 01:06:06.464 11:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 78132 ']' 01:06:06.464 11:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:06:06.464 11:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:06.464 11:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:06:06.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:06:06.464 11:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:06.464 11:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:06:06.464 11:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:06:06.722 [2024-07-22 11:03:11.675879] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:06:06.722 [2024-07-22 11:03:11.675942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78132 ] 01:06:06.722 [2024-07-22 11:03:11.802125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:06.722 [2024-07-22 11:03:11.842197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:06:06.722 [2024-07-22 11:03:11.883142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:06:07.656 11:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:07.656 11:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 01:06:07.656 11:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:06:07.656 Nvme0n1 01:06:07.656 11:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:06:07.916 [ 01:06:07.916 { 01:06:07.916 "name": "Nvme0n1", 01:06:07.916 "aliases": [ 01:06:07.916 "08c898bc-6626-45ab-9a93-c9a9cf818c8b" 01:06:07.916 ], 01:06:07.916 "product_name": "NVMe disk", 01:06:07.916 "block_size": 4096, 01:06:07.916 "num_blocks": 38912, 01:06:07.916 "uuid": "08c898bc-6626-45ab-9a93-c9a9cf818c8b", 01:06:07.916 "assigned_rate_limits": { 01:06:07.916 "rw_ios_per_sec": 0, 01:06:07.916 "rw_mbytes_per_sec": 0, 01:06:07.916 "r_mbytes_per_sec": 0, 01:06:07.916 "w_mbytes_per_sec": 0 01:06:07.916 }, 01:06:07.916 "claimed": false, 01:06:07.916 "zoned": false, 01:06:07.916 "supported_io_types": { 01:06:07.916 "read": true, 01:06:07.916 "write": true, 01:06:07.916 "unmap": true, 01:06:07.916 "flush": true, 01:06:07.916 "reset": true, 01:06:07.916 "nvme_admin": true, 01:06:07.916 "nvme_io": true, 01:06:07.916 "nvme_io_md": false, 01:06:07.916 "write_zeroes": true, 01:06:07.916 "zcopy": false, 01:06:07.916 "get_zone_info": false, 01:06:07.916 "zone_management": false, 01:06:07.916 "zone_append": false, 01:06:07.916 "compare": true, 01:06:07.916 "compare_and_write": true, 01:06:07.916 "abort": true, 01:06:07.916 "seek_hole": false, 01:06:07.916 "seek_data": false, 01:06:07.916 "copy": true, 01:06:07.916 "nvme_iov_md": false 01:06:07.916 }, 01:06:07.916 "memory_domains": [ 01:06:07.916 { 01:06:07.916 "dma_device_id": "system", 01:06:07.916 "dma_device_type": 1 01:06:07.916 } 01:06:07.916 ], 01:06:07.916 "driver_specific": { 01:06:07.916 "nvme": [ 01:06:07.916 { 01:06:07.916 "trid": { 01:06:07.916 "trtype": "TCP", 01:06:07.916 "adrfam": "IPv4", 01:06:07.916 "traddr": "10.0.0.2", 01:06:07.916 "trsvcid": "4420", 01:06:07.916 "subnqn": "nqn.2016-06.io.spdk:cnode0" 01:06:07.916 }, 01:06:07.917 "ctrlr_data": { 01:06:07.917 "cntlid": 1, 01:06:07.917 "vendor_id": "0x8086", 01:06:07.917 "model_number": "SPDK bdev Controller", 01:06:07.917 "serial_number": "SPDK0", 01:06:07.917 "firmware_revision": "24.09", 01:06:07.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:06:07.917 "oacs": { 01:06:07.917 "security": 0, 01:06:07.917 "format": 0, 01:06:07.917 "firmware": 0, 01:06:07.917 "ns_manage": 0 01:06:07.917 }, 01:06:07.917 "multi_ctrlr": true, 01:06:07.917 
"ana_reporting": false 01:06:07.917 }, 01:06:07.917 "vs": { 01:06:07.917 "nvme_version": "1.3" 01:06:07.917 }, 01:06:07.917 "ns_data": { 01:06:07.917 "id": 1, 01:06:07.917 "can_share": true 01:06:07.917 } 01:06:07.917 } 01:06:07.917 ], 01:06:07.917 "mp_policy": "active_passive" 01:06:07.917 } 01:06:07.917 } 01:06:07.917 ] 01:06:07.917 11:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=78150 01:06:07.917 11:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:06:07.917 11:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:06:07.917 Running I/O for 10 seconds... 01:06:08.853 Latency(us) 01:06:08.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:08.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:08.853 Nvme0n1 : 1.00 9940.00 38.83 0.00 0.00 0.00 0.00 0.00 01:06:08.853 =================================================================================================================== 01:06:08.853 Total : 9940.00 38.83 0.00 0.00 0.00 0.00 0.00 01:06:08.853 01:06:09.787 11:03:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b8021819-0325-4000-b294-a7002f80f1d2 01:06:10.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:10.046 Nvme0n1 : 2.00 10298.50 40.23 0.00 0.00 0.00 0.00 0.00 01:06:10.046 =================================================================================================================== 01:06:10.046 Total : 10298.50 40.23 0.00 0.00 0.00 0.00 0.00 01:06:10.046 01:06:10.046 true 01:06:10.046 11:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:06:10.046 11:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8021819-0325-4000-b294-a7002f80f1d2 01:06:10.305 11:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:06:10.305 11:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:06:10.305 11:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 78150 01:06:10.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:10.874 Nvme0n1 : 3.00 9913.67 38.73 0.00 0.00 0.00 0.00 0.00 01:06:10.874 =================================================================================================================== 01:06:10.874 Total : 9913.67 38.73 0.00 0.00 0.00 0.00 0.00 01:06:10.874 01:06:12.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:12.252 Nvme0n1 : 4.00 9687.75 37.84 0.00 0.00 0.00 0.00 0.00 01:06:12.252 =================================================================================================================== 01:06:12.252 Total : 9687.75 37.84 0.00 0.00 0.00 0.00 0.00 01:06:12.252 01:06:13.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:13.188 Nvme0n1 : 5.00 9528.20 37.22 0.00 0.00 0.00 0.00 0.00 01:06:13.188 =================================================================================================================== 01:06:13.188 Total : 9528.20 37.22 0.00 0.00 0.00 
0.00 0.00 01:06:13.188 01:06:14.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:14.129 Nvme0n1 : 6.00 9379.50 36.64 0.00 0.00 0.00 0.00 0.00 01:06:14.129 =================================================================================================================== 01:06:14.129 Total : 9379.50 36.64 0.00 0.00 0.00 0.00 0.00 01:06:14.129 01:06:15.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:15.091 Nvme0n1 : 7.00 9308.43 36.36 0.00 0.00 0.00 0.00 0.00 01:06:15.091 =================================================================================================================== 01:06:15.091 Total : 9308.43 36.36 0.00 0.00 0.00 0.00 0.00 01:06:15.091 01:06:16.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:16.027 Nvme0n1 : 8.00 9240.25 36.09 0.00 0.00 0.00 0.00 0.00 01:06:16.027 =================================================================================================================== 01:06:16.027 Total : 9240.25 36.09 0.00 0.00 0.00 0.00 0.00 01:06:16.027 01:06:16.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:16.963 Nvme0n1 : 9.00 9201.33 35.94 0.00 0.00 0.00 0.00 0.00 01:06:16.964 =================================================================================================================== 01:06:16.964 Total : 9201.33 35.94 0.00 0.00 0.00 0.00 0.00 01:06:16.964 01:06:17.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:17.901 Nvme0n1 : 10.00 9154.90 35.76 0.00 0.00 0.00 0.00 0.00 01:06:17.901 =================================================================================================================== 01:06:17.901 Total : 9154.90 35.76 0.00 0.00 0.00 0.00 0.00 01:06:17.901 01:06:17.901 01:06:17.901 Latency(us) 01:06:17.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:17.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:17.901 Nvme0n1 : 10.01 9162.59 35.79 0.00 0.00 13965.81 5579.77 35794.76 01:06:17.901 =================================================================================================================== 01:06:17.901 Total : 9162.59 35.79 0.00 0.00 13965.81 5579.77 35794.76 01:06:17.901 0 01:06:17.901 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 78132 01:06:17.901 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 78132 ']' 01:06:17.901 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 78132 01:06:17.901 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 01:06:17.901 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:17.901 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78132 01:06:17.901 killing process with pid 78132 01:06:17.901 Received shutdown signal, test time was about 10.000000 seconds 01:06:17.901 01:06:17.901 Latency(us) 01:06:17.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:17.902 =================================================================================================================== 01:06:17.902 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:06:17.902 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 01:06:17.902 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:06:17.902 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78132' 01:06:17.902 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 78132 01:06:17.902 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 78132 01:06:18.163 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:06:18.422 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:06:18.680 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8021819-0325-4000-b294-a7002f80f1d2 01:06:18.680 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:06:18.940 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:06:18.940 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 01:06:18.940 11:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:06:18.940 [2024-07-22 11:03:24.126564] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8021819-0325-4000-b294-a7002f80f1d2 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8021819-0325-4000-b294-a7002f80f1d2 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8021819-0325-4000-b294-a7002f80f1d2 01:06:19.198 request: 01:06:19.198 { 01:06:19.198 "uuid": "b8021819-0325-4000-b294-a7002f80f1d2", 01:06:19.198 "method": "bdev_lvol_get_lvstores", 01:06:19.198 "req_id": 1 01:06:19.198 } 01:06:19.198 Got JSON-RPC error response 01:06:19.198 response: 01:06:19.198 { 01:06:19.198 "code": -19, 01:06:19.198 "message": "No such device" 01:06:19.198 } 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:06:19.198 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:06:19.457 aio_bdev 01:06:19.457 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 08c898bc-6626-45ab-9a93-c9a9cf818c8b 01:06:19.457 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=08c898bc-6626-45ab-9a93-c9a9cf818c8b 01:06:19.457 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 01:06:19.457 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 01:06:19.457 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 01:06:19.457 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 01:06:19.457 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:06:19.715 11:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08c898bc-6626-45ab-9a93-c9a9cf818c8b -t 2000 01:06:19.973 [ 01:06:19.973 { 01:06:19.973 "name": "08c898bc-6626-45ab-9a93-c9a9cf818c8b", 01:06:19.973 "aliases": [ 01:06:19.973 "lvs/lvol" 01:06:19.973 ], 01:06:19.973 "product_name": "Logical Volume", 01:06:19.973 "block_size": 4096, 01:06:19.973 "num_blocks": 38912, 01:06:19.973 "uuid": "08c898bc-6626-45ab-9a93-c9a9cf818c8b", 01:06:19.973 "assigned_rate_limits": { 01:06:19.973 "rw_ios_per_sec": 0, 01:06:19.973 "rw_mbytes_per_sec": 0, 01:06:19.973 "r_mbytes_per_sec": 0, 01:06:19.973 "w_mbytes_per_sec": 0 01:06:19.973 }, 01:06:19.973 "claimed": false, 01:06:19.973 "zoned": false, 01:06:19.973 "supported_io_types": { 01:06:19.973 "read": true, 01:06:19.973 "write": true, 01:06:19.973 "unmap": true, 01:06:19.973 "flush": false, 01:06:19.973 "reset": true, 01:06:19.973 "nvme_admin": false, 01:06:19.973 "nvme_io": false, 01:06:19.973 "nvme_io_md": false, 01:06:19.973 "write_zeroes": true, 01:06:19.973 "zcopy": false, 01:06:19.973 "get_zone_info": false, 01:06:19.973 "zone_management": false, 01:06:19.973 "zone_append": false, 01:06:19.973 "compare": false, 01:06:19.973 "compare_and_write": false, 01:06:19.973 "abort": false, 01:06:19.973 "seek_hole": true, 01:06:19.973 "seek_data": true, 01:06:19.973 "copy": false, 01:06:19.973 "nvme_iov_md": false 01:06:19.973 }, 01:06:19.973 
"driver_specific": { 01:06:19.973 "lvol": { 01:06:19.973 "lvol_store_uuid": "b8021819-0325-4000-b294-a7002f80f1d2", 01:06:19.973 "base_bdev": "aio_bdev", 01:06:19.973 "thin_provision": false, 01:06:19.973 "num_allocated_clusters": 38, 01:06:19.973 "snapshot": false, 01:06:19.973 "clone": false, 01:06:19.974 "esnap_clone": false 01:06:19.974 } 01:06:19.974 } 01:06:19.974 } 01:06:19.974 ] 01:06:19.974 11:03:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 01:06:19.974 11:03:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8021819-0325-4000-b294-a7002f80f1d2 01:06:19.974 11:03:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:06:20.233 11:03:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:06:20.233 11:03:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:06:20.233 11:03:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8021819-0325-4000-b294-a7002f80f1d2 01:06:20.492 11:03:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:06:20.492 11:03:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 08c898bc-6626-45ab-9a93-c9a9cf818c8b 01:06:20.749 11:03:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b8021819-0325-4000-b294-a7002f80f1d2 01:06:21.007 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:06:21.266 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:06:21.577 ************************************ 01:06:21.577 END TEST lvs_grow_clean 01:06:21.577 ************************************ 01:06:21.577 01:06:21.577 real 0m16.907s 01:06:21.577 user 0m15.016s 01:06:21.577 sys 0m2.969s 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:06:21.577 ************************************ 01:06:21.577 START TEST lvs_grow_dirty 01:06:21.577 ************************************ 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:06:21.577 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:06:21.835 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:06:21.835 11:03:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:06:22.094 11:03:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2db62487-bcfc-4128-8af6-be6d52d3cff9 01:06:22.094 11:03:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:06:22.094 11:03:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 01:06:22.353 11:03:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:06:22.353 11:03:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:06:22.353 11:03:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 lvol 150 01:06:22.612 11:03:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f5dffffa-3902-4234-bb8f-2a353e5a943a 01:06:22.612 11:03:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:06:22.612 11:03:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:06:22.870 [2024-07-22 11:03:27.821393] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:06:22.870 [2024-07-22 11:03:27.821475] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:06:22.870 true 01:06:22.870 11:03:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 01:06:22.870 11:03:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:06:22.870 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:06:22.871 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:06:23.129 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f5dffffa-3902-4234-bb8f-2a353e5a943a 01:06:23.388 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:06:23.647 [2024-07-22 11:03:28.696425] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:06:23.647 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:06:23.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:06:23.907 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=78391 01:06:23.907 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:06:23.907 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:06:23.907 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 78391 /var/tmp/bdevperf.sock 01:06:23.907 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 78391 ']' 01:06:23.907 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:06:23.907 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:23.907 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:06:23.907 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:23.907 11:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:06:23.907 [2024-07-22 11:03:28.954816] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:06:23.907 [2024-07-22 11:03:28.954935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78391 ] 01:06:23.907 [2024-07-22 11:03:29.098427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:24.166 [2024-07-22 11:03:29.158910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:06:24.166 [2024-07-22 11:03:29.204101] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:06:24.764 11:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:24.764 11:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 01:06:24.764 11:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:06:25.024 Nvme0n1 01:06:25.024 11:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:06:25.283 [ 01:06:25.283 { 01:06:25.283 "name": "Nvme0n1", 01:06:25.283 "aliases": [ 01:06:25.283 "f5dffffa-3902-4234-bb8f-2a353e5a943a" 01:06:25.283 ], 01:06:25.283 "product_name": "NVMe disk", 01:06:25.283 "block_size": 4096, 01:06:25.283 "num_blocks": 38912, 01:06:25.283 "uuid": "f5dffffa-3902-4234-bb8f-2a353e5a943a", 01:06:25.283 "assigned_rate_limits": { 01:06:25.283 "rw_ios_per_sec": 0, 01:06:25.283 "rw_mbytes_per_sec": 0, 01:06:25.283 "r_mbytes_per_sec": 0, 01:06:25.283 "w_mbytes_per_sec": 0 01:06:25.283 }, 01:06:25.283 "claimed": false, 01:06:25.283 "zoned": false, 01:06:25.283 "supported_io_types": { 01:06:25.283 "read": true, 01:06:25.283 "write": true, 01:06:25.283 "unmap": true, 01:06:25.283 "flush": true, 01:06:25.283 "reset": true, 01:06:25.283 "nvme_admin": true, 01:06:25.283 "nvme_io": true, 01:06:25.283 "nvme_io_md": false, 01:06:25.283 "write_zeroes": true, 01:06:25.283 "zcopy": false, 01:06:25.283 "get_zone_info": false, 01:06:25.283 "zone_management": false, 01:06:25.283 "zone_append": false, 01:06:25.283 "compare": true, 01:06:25.283 "compare_and_write": true, 01:06:25.283 "abort": true, 01:06:25.283 "seek_hole": false, 01:06:25.283 "seek_data": false, 01:06:25.283 "copy": true, 01:06:25.283 "nvme_iov_md": false 01:06:25.283 }, 01:06:25.283 "memory_domains": [ 01:06:25.283 { 01:06:25.283 "dma_device_id": "system", 01:06:25.283 "dma_device_type": 1 01:06:25.283 } 01:06:25.283 ], 01:06:25.283 "driver_specific": { 01:06:25.283 "nvme": [ 01:06:25.283 { 01:06:25.283 "trid": { 01:06:25.283 "trtype": "TCP", 01:06:25.283 "adrfam": "IPv4", 01:06:25.283 "traddr": "10.0.0.2", 01:06:25.283 "trsvcid": "4420", 01:06:25.283 "subnqn": "nqn.2016-06.io.spdk:cnode0" 01:06:25.283 }, 01:06:25.283 "ctrlr_data": { 01:06:25.283 "cntlid": 1, 01:06:25.283 "vendor_id": "0x8086", 01:06:25.283 "model_number": "SPDK bdev Controller", 01:06:25.283 "serial_number": "SPDK0", 01:06:25.283 "firmware_revision": "24.09", 01:06:25.283 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:06:25.283 "oacs": { 01:06:25.283 "security": 0, 01:06:25.283 "format": 0, 01:06:25.283 "firmware": 0, 01:06:25.283 "ns_manage": 0 01:06:25.283 }, 01:06:25.283 "multi_ctrlr": true, 01:06:25.283 
"ana_reporting": false 01:06:25.283 }, 01:06:25.283 "vs": { 01:06:25.283 "nvme_version": "1.3" 01:06:25.283 }, 01:06:25.283 "ns_data": { 01:06:25.283 "id": 1, 01:06:25.283 "can_share": true 01:06:25.283 } 01:06:25.283 } 01:06:25.283 ], 01:06:25.283 "mp_policy": "active_passive" 01:06:25.283 } 01:06:25.283 } 01:06:25.283 ] 01:06:25.283 11:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=78413 01:06:25.283 11:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:06:25.283 11:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:06:25.283 Running I/O for 10 seconds... 01:06:26.221 Latency(us) 01:06:26.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:26.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:26.221 Nvme0n1 : 1.00 9652.00 37.70 0.00 0.00 0.00 0.00 0.00 01:06:26.221 =================================================================================================================== 01:06:26.221 Total : 9652.00 37.70 0.00 0.00 0.00 0.00 0.00 01:06:26.221 01:06:27.166 11:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 01:06:27.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:27.423 Nvme0n1 : 2.00 9511.50 37.15 0.00 0.00 0.00 0.00 0.00 01:06:27.423 =================================================================================================================== 01:06:27.423 Total : 9511.50 37.15 0.00 0.00 0.00 0.00 0.00 01:06:27.423 01:06:27.423 true 01:06:27.423 11:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 01:06:27.423 11:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:06:27.681 11:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:06:27.681 11:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:06:27.681 11:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 78413 01:06:28.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:28.249 Nvme0n1 : 3.00 9452.67 36.92 0.00 0.00 0.00 0.00 0.00 01:06:28.249 =================================================================================================================== 01:06:28.249 Total : 9452.67 36.92 0.00 0.00 0.00 0.00 0.00 01:06:28.249 01:06:29.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:29.627 Nvme0n1 : 4.00 9374.50 36.62 0.00 0.00 0.00 0.00 0.00 01:06:29.627 =================================================================================================================== 01:06:29.627 Total : 9374.50 36.62 0.00 0.00 0.00 0.00 0.00 01:06:29.627 01:06:30.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:30.192 Nvme0n1 : 5.00 9120.60 35.63 0.00 0.00 0.00 0.00 0.00 01:06:30.192 =================================================================================================================== 01:06:30.192 Total : 9120.60 35.63 0.00 0.00 0.00 
0.00 0.00 01:06:30.192 01:06:31.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:31.568 Nvme0n1 : 6.00 8860.17 34.61 0.00 0.00 0.00 0.00 0.00 01:06:31.568 =================================================================================================================== 01:06:31.568 Total : 8860.17 34.61 0.00 0.00 0.00 0.00 0.00 01:06:31.568 01:06:32.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:32.501 Nvme0n1 : 7.00 8895.14 34.75 0.00 0.00 0.00 0.00 0.00 01:06:32.501 =================================================================================================================== 01:06:32.501 Total : 8895.14 34.75 0.00 0.00 0.00 0.00 0.00 01:06:32.501 01:06:33.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:33.439 Nvme0n1 : 8.00 8925.88 34.87 0.00 0.00 0.00 0.00 0.00 01:06:33.439 =================================================================================================================== 01:06:33.439 Total : 8925.88 34.87 0.00 0.00 0.00 0.00 0.00 01:06:33.439 01:06:34.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:34.381 Nvme0n1 : 9.00 8829.11 34.49 0.00 0.00 0.00 0.00 0.00 01:06:34.381 =================================================================================================================== 01:06:34.381 Total : 8829.11 34.49 0.00 0.00 0.00 0.00 0.00 01:06:34.381 01:06:35.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:35.319 Nvme0n1 : 10.00 8821.60 34.46 0.00 0.00 0.00 0.00 0.00 01:06:35.319 =================================================================================================================== 01:06:35.319 Total : 8821.60 34.46 0.00 0.00 0.00 0.00 0.00 01:06:35.319 01:06:35.319 01:06:35.319 Latency(us) 01:06:35.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:35.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:35.319 Nvme0n1 : 10.01 8828.46 34.49 0.00 0.00 14493.86 5790.33 308256.08 01:06:35.319 =================================================================================================================== 01:06:35.319 Total : 8828.46 34.49 0.00 0.00 14493.86 5790.33 308256.08 01:06:35.319 0 01:06:35.319 11:03:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 78391 01:06:35.319 11:03:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 78391 ']' 01:06:35.319 11:03:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 78391 01:06:35.319 11:03:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 01:06:35.319 11:03:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:35.319 11:03:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78391 01:06:35.319 11:03:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:06:35.319 11:03:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:06:35.319 killing process with pid 78391 01:06:35.319 11:03:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78391' 01:06:35.319 11:03:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 78391 
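Condensed, the grow-under-I/O sequence traced above reduces to a handful of RPC calls (a sketch only; the paths, lvstore UUID and cluster counts are the ones printed in this run):

  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev        # backing file grown 200M -> 400M during setup
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev           # aio_bdev picks up the new size (51200 -> 102400 blocks)
  # while bdevperf keeps driving randwrite I/O to Nvme0n1 over NVMe/TCP:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2db62487-bcfc-4128-8af6-be6d52d3cff9
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 | jq -r '.[0].total_data_clusters'

The cluster arithmetic behind the asserted values: with 4 MiB clusters, the 200 MiB file yields 49 data clusters and the 400 MiB file 99 (in this run one cluster's worth is consumed by lvstore metadata); the 150 MiB lvol rounds up to 38 clusters, i.e. 152 MiB or 38912 4 KiB blocks, which is why free_clusters settles at 99 - 38 = 61 after the grow.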
01:06:35.319 Received shutdown signal, test time was about 10.000000 seconds 01:06:35.319 01:06:35.319 Latency(us) 01:06:35.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:35.319 =================================================================================================================== 01:06:35.319 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:06:35.319 11:03:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 78391 01:06:35.578 11:03:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:06:35.838 11:03:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:06:36.097 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 01:06:36.097 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 78055 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 78055 01:06:36.356 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 78055 Killed "${NVMF_APP[@]}" "$@" 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=78547 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 78547 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 78547 ']' 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:36.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:36.356 11:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:06:36.356 [2024-07-22 11:03:41.437423] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:06:36.356 [2024-07-22 11:03:41.437524] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:06:36.614 [2024-07-22 11:03:41.585208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:36.614 [2024-07-22 11:03:41.639499] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:36.614 [2024-07-22 11:03:41.639556] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:06:36.614 [2024-07-22 11:03:41.639566] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:36.614 [2024-07-22 11:03:41.639574] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:36.614 [2024-07-22 11:03:41.639581] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:06:36.614 [2024-07-22 11:03:41.639609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:06:36.614 [2024-07-22 11:03:41.682616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:06:37.183 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:37.183 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 01:06:37.183 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:06:37.183 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 01:06:37.183 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:06:37.183 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:37.183 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:06:37.442 [2024-07-22 11:03:42.543773] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 01:06:37.442 [2024-07-22 11:03:42.544087] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 01:06:37.442 [2024-07-22 11:03:42.544951] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 01:06:37.442 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 01:06:37.442 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f5dffffa-3902-4234-bb8f-2a353e5a943a 01:06:37.442 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=f5dffffa-3902-4234-bb8f-2a353e5a943a 01:06:37.442 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 01:06:37.442 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
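The "dirty" variant turns on what was just traced: the first target (pid 78055) is killed with SIGKILL while the lvstore is still live, a fresh nvmf_tgt is started, and re-creating the AIO bdev over the same backing file is enough for the blobstore to run recovery and bring the lvstore and its lvol back. In outline (UUIDs and paths as printed in this run; a sketch, not the full waitforbdev logic):

  kill -9 78055                                                                  # dirty shutdown of the original target
  # start a new nvmf_tgt in the same netns, then re-attach the same backing file:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
  # once "Performing recovery on blobstore" completes, the lvol is visible again:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f5dffffa-3902-4234-bb8f-2a353e5a943a -t 2000
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 | jq -r '.[0].free_clusters'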
01:06:37.442 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 01:06:37.443 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 01:06:37.443 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:06:37.702 11:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f5dffffa-3902-4234-bb8f-2a353e5a943a -t 2000 01:06:37.962 [ 01:06:37.962 { 01:06:37.962 "name": "f5dffffa-3902-4234-bb8f-2a353e5a943a", 01:06:37.962 "aliases": [ 01:06:37.962 "lvs/lvol" 01:06:37.962 ], 01:06:37.962 "product_name": "Logical Volume", 01:06:37.962 "block_size": 4096, 01:06:37.962 "num_blocks": 38912, 01:06:37.962 "uuid": "f5dffffa-3902-4234-bb8f-2a353e5a943a", 01:06:37.962 "assigned_rate_limits": { 01:06:37.962 "rw_ios_per_sec": 0, 01:06:37.962 "rw_mbytes_per_sec": 0, 01:06:37.962 "r_mbytes_per_sec": 0, 01:06:37.962 "w_mbytes_per_sec": 0 01:06:37.962 }, 01:06:37.962 "claimed": false, 01:06:37.962 "zoned": false, 01:06:37.962 "supported_io_types": { 01:06:37.962 "read": true, 01:06:37.962 "write": true, 01:06:37.962 "unmap": true, 01:06:37.962 "flush": false, 01:06:37.962 "reset": true, 01:06:37.962 "nvme_admin": false, 01:06:37.962 "nvme_io": false, 01:06:37.962 "nvme_io_md": false, 01:06:37.962 "write_zeroes": true, 01:06:37.962 "zcopy": false, 01:06:37.962 "get_zone_info": false, 01:06:37.962 "zone_management": false, 01:06:37.962 "zone_append": false, 01:06:37.962 "compare": false, 01:06:37.962 "compare_and_write": false, 01:06:37.962 "abort": false, 01:06:37.962 "seek_hole": true, 01:06:37.962 "seek_data": true, 01:06:37.962 "copy": false, 01:06:37.962 "nvme_iov_md": false 01:06:37.962 }, 01:06:37.962 "driver_specific": { 01:06:37.962 "lvol": { 01:06:37.962 "lvol_store_uuid": "2db62487-bcfc-4128-8af6-be6d52d3cff9", 01:06:37.962 "base_bdev": "aio_bdev", 01:06:37.962 "thin_provision": false, 01:06:37.962 "num_allocated_clusters": 38, 01:06:37.962 "snapshot": false, 01:06:37.962 "clone": false, 01:06:37.962 "esnap_clone": false 01:06:37.962 } 01:06:37.962 } 01:06:37.962 } 01:06:37.962 ] 01:06:37.962 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 01:06:37.962 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 01:06:37.962 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 01:06:38.221 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 01:06:38.221 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 01:06:38.221 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 01:06:38.479 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 01:06:38.479 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:06:38.479 [2024-07-22 11:03:43.679525] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 01:06:38.737 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 01:06:38.737 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 01:06:38.737 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 01:06:38.737 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:06:38.737 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:38.737 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:06:38.737 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:38.737 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:06:38.737 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:38.737 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:06:38.737 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:06:38.737 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 01:06:38.737 request: 01:06:38.737 { 01:06:38.737 "uuid": "2db62487-bcfc-4128-8af6-be6d52d3cff9", 01:06:38.737 "method": "bdev_lvol_get_lvstores", 01:06:38.737 "req_id": 1 01:06:38.737 } 01:06:38.737 Got JSON-RPC error response 01:06:38.737 response: 01:06:38.737 { 01:06:38.737 "code": -19, 01:06:38.737 "message": "No such device" 01:06:38.737 } 01:06:38.996 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 01:06:38.996 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:06:38.996 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:06:38.996 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:06:38.996 11:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:06:38.996 aio_bdev 01:06:38.996 11:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f5dffffa-3902-4234-bb8f-2a353e5a943a 01:06:38.996 11:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=f5dffffa-3902-4234-bb8f-2a353e5a943a 01:06:38.996 11:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 01:06:38.996 11:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 01:06:38.996 11:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 01:06:38.996 11:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 01:06:38.996 11:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:06:39.254 11:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f5dffffa-3902-4234-bb8f-2a353e5a943a -t 2000 01:06:39.514 [ 01:06:39.514 { 01:06:39.514 "name": "f5dffffa-3902-4234-bb8f-2a353e5a943a", 01:06:39.514 "aliases": [ 01:06:39.514 "lvs/lvol" 01:06:39.514 ], 01:06:39.514 "product_name": "Logical Volume", 01:06:39.514 "block_size": 4096, 01:06:39.514 "num_blocks": 38912, 01:06:39.514 "uuid": "f5dffffa-3902-4234-bb8f-2a353e5a943a", 01:06:39.514 "assigned_rate_limits": { 01:06:39.514 "rw_ios_per_sec": 0, 01:06:39.514 "rw_mbytes_per_sec": 0, 01:06:39.514 "r_mbytes_per_sec": 0, 01:06:39.514 "w_mbytes_per_sec": 0 01:06:39.514 }, 01:06:39.514 "claimed": false, 01:06:39.514 "zoned": false, 01:06:39.514 "supported_io_types": { 01:06:39.514 "read": true, 01:06:39.514 "write": true, 01:06:39.514 "unmap": true, 01:06:39.514 "flush": false, 01:06:39.514 "reset": true, 01:06:39.514 "nvme_admin": false, 01:06:39.514 "nvme_io": false, 01:06:39.514 "nvme_io_md": false, 01:06:39.514 "write_zeroes": true, 01:06:39.514 "zcopy": false, 01:06:39.514 "get_zone_info": false, 01:06:39.514 "zone_management": false, 01:06:39.514 "zone_append": false, 01:06:39.514 "compare": false, 01:06:39.514 "compare_and_write": false, 01:06:39.514 "abort": false, 01:06:39.514 "seek_hole": true, 01:06:39.514 "seek_data": true, 01:06:39.514 "copy": false, 01:06:39.514 "nvme_iov_md": false 01:06:39.514 }, 01:06:39.514 "driver_specific": { 01:06:39.514 "lvol": { 01:06:39.514 "lvol_store_uuid": "2db62487-bcfc-4128-8af6-be6d52d3cff9", 01:06:39.514 "base_bdev": "aio_bdev", 01:06:39.514 "thin_provision": false, 01:06:39.514 "num_allocated_clusters": 38, 01:06:39.514 "snapshot": false, 01:06:39.514 "clone": false, 01:06:39.514 "esnap_clone": false 01:06:39.514 } 01:06:39.514 } 01:06:39.514 } 01:06:39.514 ] 01:06:39.514 11:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 01:06:39.514 11:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 01:06:39.514 11:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:06:39.773 11:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:06:39.773 11:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:06:39.773 11:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 01:06:40.033 11:03:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:06:40.033 11:03:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f5dffffa-3902-4234-bb8f-2a353e5a943a 01:06:40.033 11:03:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 2db62487-bcfc-4128-8af6-be6d52d3cff9 01:06:40.292 11:03:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:06:40.551 11:03:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:06:40.811 ************************************ 01:06:40.811 END TEST lvs_grow_dirty 01:06:40.811 ************************************ 01:06:40.811 01:06:40.811 real 0m19.287s 01:06:40.811 user 0m38.328s 01:06:40.811 sys 0m8.276s 01:06:40.811 11:03:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 01:06:40.811 11:03:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:06:41.070 nvmf_trace.0 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 01:06:41.070 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:06:41.070 rmmod nvme_tcp 01:06:41.329 rmmod nvme_fabrics 01:06:41.329 rmmod nvme_keyring 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 78547 ']' 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 78547 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 78547 ']' 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 78547 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78547 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78547' 01:06:41.329 killing process with pid 78547 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 78547 01:06:41.329 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 78547 01:06:41.589 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:06:41.589 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:06:41.589 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:06:41.589 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:06:41.589 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 01:06:41.589 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:41.589 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:41.589 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:41.589 11:03:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:06:41.589 01:06:41.589 real 0m38.597s 01:06:41.589 user 0m58.952s 01:06:41.589 sys 0m12.116s 01:06:41.589 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 01:06:41.589 11:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:06:41.589 ************************************ 01:06:41.589 END TEST nvmf_lvs_grow 01:06:41.589 ************************************ 01:06:41.589 11:03:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:06:41.589 11:03:46 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 01:06:41.589 11:03:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:06:41.589 11:03:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:06:41.589 11:03:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:06:41.589 ************************************ 01:06:41.589 START TEST nvmf_bdev_io_wait 01:06:41.589 ************************************ 01:06:41.589 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 01:06:41.848 * Looking for test storage... 
01:06:41.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:41.848 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:06:41.849 Cannot find device "nvmf_tgt_br" 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:06:41.849 Cannot find device "nvmf_tgt_br2" 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:06:41.849 Cannot find device "nvmf_tgt_br" 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:06:41.849 Cannot find device "nvmf_tgt_br2" 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 01:06:41.849 11:03:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:06:41.849 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
01:06:41.849 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:42.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:42.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:06:42.124 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:06:42.125 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:06:42.125 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:06:42.125 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:06:42.125 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:06:42.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 01:06:42.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 01:06:42.405 01:06:42.405 --- 10.0.0.2 ping statistics --- 01:06:42.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:42.405 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:06:42.405 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:06:42.405 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 01:06:42.405 01:06:42.405 --- 10.0.0.3 ping statistics --- 01:06:42.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:42.405 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:06:42.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:06:42.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 01:06:42.405 01:06:42.405 --- 10.0.0.1 ping statistics --- 01:06:42.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:42.405 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=78853 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 78853 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 78853 ']' 01:06:42.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
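The topology that nvmf_veth_init assembled above is three veth pairs hung off one bridge: the initiator end (nvmf_init_if, 10.0.0.1) stays in the root namespace, both target interfaces (10.0.0.2 and 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace where nvmf_tgt runs, and TCP port 4420 is opened on the initiator side. Reduced to its essentials (the link-up and bridge-enslave steps are elided; commands as printed in the trace):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair, stays in the root ns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # first target pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                               # nvmf_init_br / nvmf_tgt_br / nvmf_tgt_br2 are enslaved to it
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the target namespace) confirm the bridge path before the target application is started.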
01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:42.405 11:03:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:06:42.405 [2024-07-22 11:03:47.452707] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:06:42.405 [2024-07-22 11:03:47.452799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:06:42.405 [2024-07-22 11:03:47.597979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:06:42.664 [2024-07-22 11:03:47.657068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:42.664 [2024-07-22 11:03:47.657130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:06:42.664 [2024-07-22 11:03:47.657140] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:42.664 [2024-07-22 11:03:47.657148] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:42.664 [2024-07-22 11:03:47.657155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:06:42.664 [2024-07-22 11:03:47.657395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:06:42.664 [2024-07-22 11:03:47.657620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:06:42.664 [2024-07-22 11:03:47.658614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:06:42.664 [2024-07-22 11:03:47.658617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:06:43.233 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:43.233 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 01:06:43.233 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:06:43.233 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 01:06:43.233 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:06:43.233 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:43.233 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 01:06:43.233 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:43.233 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:06:43.233 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:43.233 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 01:06:43.233 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:43.233 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:06:43.493 [2024-07-22 11:03:48.472437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:06:43.493 
11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:06:43.493 [2024-07-22 11:03:48.488124] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:06:43.493 Malloc0 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:06:43.493 [2024-07-22 11:03:48.561568] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=78888 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=78890 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:06:43.493 { 01:06:43.493 "params": { 01:06:43.493 "name": "Nvme$subsystem", 01:06:43.493 "trtype": "$TEST_TRANSPORT", 01:06:43.493 "traddr": "$NVMF_FIRST_TARGET_IP", 01:06:43.493 "adrfam": "ipv4", 01:06:43.493 "trsvcid": "$NVMF_PORT", 01:06:43.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:06:43.493 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 01:06:43.493 "hdgst": ${hdgst:-false}, 01:06:43.493 "ddgst": ${ddgst:-false} 01:06:43.493 }, 01:06:43.493 "method": "bdev_nvme_attach_controller" 01:06:43.493 } 01:06:43.493 EOF 01:06:43.493 )") 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:06:43.493 { 01:06:43.493 "params": { 01:06:43.493 "name": "Nvme$subsystem", 01:06:43.493 "trtype": "$TEST_TRANSPORT", 01:06:43.493 "traddr": "$NVMF_FIRST_TARGET_IP", 01:06:43.493 "adrfam": "ipv4", 01:06:43.493 "trsvcid": "$NVMF_PORT", 01:06:43.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:06:43.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:06:43.493 "hdgst": ${hdgst:-false}, 01:06:43.493 "ddgst": ${ddgst:-false} 01:06:43.493 }, 01:06:43.493 "method": "bdev_nvme_attach_controller" 01:06:43.493 } 01:06:43.493 EOF 01:06:43.493 )") 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=78892 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:06:43.493 { 01:06:43.493 "params": { 01:06:43.493 "name": "Nvme$subsystem", 01:06:43.493 "trtype": "$TEST_TRANSPORT", 01:06:43.493 "traddr": "$NVMF_FIRST_TARGET_IP", 01:06:43.493 "adrfam": "ipv4", 01:06:43.493 "trsvcid": "$NVMF_PORT", 01:06:43.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:06:43.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:06:43.493 "hdgst": ${hdgst:-false}, 01:06:43.493 "ddgst": ${ddgst:-false} 01:06:43.493 }, 01:06:43.493 "method": "bdev_nvme_attach_controller" 01:06:43.493 } 01:06:43.493 EOF 01:06:43.493 )") 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=78898 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:06:43.493 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:06:43.493 { 01:06:43.493 "params": { 01:06:43.493 "name": "Nvme$subsystem", 01:06:43.493 "trtype": "$TEST_TRANSPORT", 01:06:43.494 "traddr": "$NVMF_FIRST_TARGET_IP", 01:06:43.494 "adrfam": "ipv4", 01:06:43.494 "trsvcid": "$NVMF_PORT", 01:06:43.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:06:43.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:06:43.494 "hdgst": ${hdgst:-false}, 01:06:43.494 "ddgst": ${ddgst:-false} 01:06:43.494 }, 01:06:43.494 "method": "bdev_nvme_attach_controller" 01:06:43.494 } 01:06:43.494 EOF 01:06:43.494 )") 01:06:43.494 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 01:06:43.494 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:06:43.494 "params": { 01:06:43.494 "name": "Nvme1", 01:06:43.494 "trtype": "tcp", 01:06:43.494 "traddr": "10.0.0.2", 01:06:43.494 "adrfam": "ipv4", 01:06:43.494 "trsvcid": "4420", 01:06:43.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:06:43.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:06:43.494 "hdgst": false, 01:06:43.494 "ddgst": false 01:06:43.494 }, 01:06:43.494 "method": "bdev_nvme_attach_controller" 01:06:43.494 }' 01:06:43.494 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 01:06:43.494 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 01:06:43.494 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:06:43.494 "params": { 01:06:43.494 "name": "Nvme1", 01:06:43.494 "trtype": "tcp", 01:06:43.494 "traddr": "10.0.0.2", 01:06:43.494 "adrfam": "ipv4", 01:06:43.494 "trsvcid": "4420", 01:06:43.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:06:43.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:06:43.494 "hdgst": false, 01:06:43.494 "ddgst": false 01:06:43.494 }, 01:06:43.494 "method": "bdev_nvme_attach_controller" 01:06:43.494 }' 01:06:43.494 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 01:06:43.494 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 01:06:43.494 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:06:43.494 "params": { 01:06:43.494 "name": "Nvme1", 01:06:43.494 "trtype": "tcp", 01:06:43.494 "traddr": "10.0.0.2", 01:06:43.494 "adrfam": "ipv4", 01:06:43.494 "trsvcid": "4420", 01:06:43.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:06:43.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:06:43.494 "hdgst": false, 01:06:43.494 "ddgst": false 01:06:43.494 }, 01:06:43.494 "method": "bdev_nvme_attach_controller" 01:06:43.494 }' 01:06:43.494 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
01:06:43.494 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 01:06:43.494 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:06:43.494 "params": { 01:06:43.494 "name": "Nvme1", 01:06:43.494 "trtype": "tcp", 01:06:43.494 "traddr": "10.0.0.2", 01:06:43.494 "adrfam": "ipv4", 01:06:43.494 "trsvcid": "4420", 01:06:43.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:06:43.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:06:43.494 "hdgst": false, 01:06:43.494 "ddgst": false 01:06:43.494 }, 01:06:43.494 "method": "bdev_nvme_attach_controller" 01:06:43.494 }' 01:06:43.494 [2024-07-22 11:03:48.622315] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:06:43.494 [2024-07-22 11:03:48.622408] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 01:06:43.494 [2024-07-22 11:03:48.633164] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:06:43.494 [2024-07-22 11:03:48.634121] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 01:06:43.494 [2024-07-22 11:03:48.642058] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:06:43.494 [2024-07-22 11:03:48.642143] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 01:06:43.494 [2024-07-22 11:03:48.648833] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:06:43.494 [2024-07-22 11:03:48.648935] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 01:06:43.494 11:03:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 78888 01:06:43.752 [2024-07-22 11:03:48.812276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:43.752 [2024-07-22 11:03:48.847564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 01:06:43.752 [2024-07-22 11:03:48.881913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:43.752 [2024-07-22 11:03:48.912324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:06:43.752 [2024-07-22 11:03:48.918365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 01:06:43.752 [2024-07-22 11:03:48.931438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:44.009 [2024-07-22 11:03:48.958953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:06:44.009 [2024-07-22 11:03:48.965714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 01:06:44.009 Running I/O for 1 seconds... 
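Behind the xtrace above, all four bdevperf jobs consume the same generated attach configuration: gen_nvmf_target_json expands the heredoc into a bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.2:4420 (the printf '%s\n' output shown), and each bdevperf instance reads it through --json /dev/fd/63, i.e. a process substitution. A sketch of what target/bdev_io_wait.sh appears to run; the flags and the WRITE/READ/FLUSH/UNMAP PID variables are taken from the trace, while the backgrounding itself is inferred from the later 'wait 78888 / 78890 / 78892 / 78898' calls:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # One instance per workload, each on its own core mask and instance id (-i),
    # matching the file-prefix spdk1..spdk4 seen in the EAL parameter lines above.
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
    READ_PID=$!
    "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    FLUSH_PID=$!
    "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"       # results appear as the Latency(us) tables below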
01:06:44.009 [2024-07-22 11:03:49.004197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:06:44.009 [2024-07-22 11:03:49.007983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:44.009 [2024-07-22 11:03:49.036769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 01:06:44.009 Running I/O for 1 seconds... 01:06:44.009 [2024-07-22 11:03:49.075090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:06:44.009 Running I/O for 1 seconds... 01:06:44.009 Running I/O for 1 seconds... 01:06:44.943 01:06:44.943 Latency(us) 01:06:44.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:44.943 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 01:06:44.943 Nvme1n1 : 1.02 7244.66 28.30 0.00 0.00 17426.54 7790.62 30530.83 01:06:44.943 =================================================================================================================== 01:06:44.943 Total : 7244.66 28.30 0.00 0.00 17426.54 7790.62 30530.83 01:06:44.943 01:06:44.943 Latency(us) 01:06:44.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:44.943 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 01:06:44.943 Nvme1n1 : 1.01 10586.00 41.35 0.00 0.00 12040.33 6079.85 22108.53 01:06:44.943 =================================================================================================================== 01:06:44.943 Total : 10586.00 41.35 0.00 0.00 12040.33 6079.85 22108.53 01:06:44.943 01:06:44.943 Latency(us) 01:06:44.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:44.943 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 01:06:44.943 Nvme1n1 : 1.00 219426.10 857.13 0.00 0.00 581.30 292.81 1190.97 01:06:44.943 =================================================================================================================== 01:06:44.943 Total : 219426.10 857.13 0.00 0.00 581.30 292.81 1190.97 01:06:45.203 01:06:45.203 Latency(us) 01:06:45.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:45.203 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 01:06:45.203 Nvme1n1 : 1.00 7845.13 30.65 0.00 0.00 16276.83 4158.51 42322.04 01:06:45.203 =================================================================================================================== 01:06:45.203 Total : 7845.13 30.65 0.00 0.00 16276.83 4158.51 42322.04 01:06:45.203 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 78890 01:06:45.203 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 78892 01:06:45.203 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 78898 01:06:45.203 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:06:45.203 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:45.203 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:06:45.203 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:45.203 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 01:06:45.203 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 01:06:45.203 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 
01:06:45.203 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:06:45.462 rmmod nvme_tcp 01:06:45.462 rmmod nvme_fabrics 01:06:45.462 rmmod nvme_keyring 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 78853 ']' 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 78853 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 78853 ']' 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 78853 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78853 01:06:45.462 killing process with pid 78853 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78853' 01:06:45.462 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 78853 01:06:45.463 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 78853 01:06:45.722 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:06:45.722 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:06:45.722 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:06:45.722 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:06:45.722 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 01:06:45.722 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:45.722 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:45.722 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:45.722 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:06:45.722 01:06:45.722 real 0m4.120s 01:06:45.722 user 0m16.668s 01:06:45.722 sys 0m2.444s 01:06:45.722 ************************************ 01:06:45.722 END TEST nvmf_bdev_io_wait 01:06:45.722 ************************************ 01:06:45.722 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 01:06:45.722 11:03:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:06:45.722 11:03:50 nvmf_tcp -- common/autotest_common.sh@1142 -- 
# return 0 01:06:45.722 11:03:50 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 01:06:45.722 11:03:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:06:45.722 11:03:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:06:45.722 11:03:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:06:45.722 ************************************ 01:06:45.722 START TEST nvmf_queue_depth 01:06:45.722 ************************************ 01:06:45.722 11:03:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 01:06:45.981 * Looking for test storage... 01:06:45.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:06:45.981 11:03:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:06:45.981 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 01:06:45.981 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:45.981 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:45.981 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:45.981 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:45.981 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:45.981 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:06:45.982 Cannot find device "nvmf_tgt_br" 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:06:45.982 Cannot find device "nvmf_tgt_br2" 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:06:45.982 11:03:51 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:06:45.982 Cannot find device "nvmf_tgt_br" 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 01:06:45.982 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:06:46.241 Cannot find device "nvmf_tgt_br2" 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:46.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:46.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:06:46.241 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:06:46.241 
11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:06:46.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:06:46.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 01:06:46.500 01:06:46.500 --- 10.0.0.2 ping statistics --- 01:06:46.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:46.500 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:06:46.500 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:06:46.500 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 01:06:46.500 01:06:46.500 --- 10.0.0.3 ping statistics --- 01:06:46.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:46.500 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:06:46.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:06:46.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 01:06:46.500 01:06:46.500 --- 10.0.0.1 ping statistics --- 01:06:46.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:46.500 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=79134 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 79134 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 79134 ']' 01:06:46.500 11:03:51 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:46.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:46.500 11:03:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:06:46.500 [2024-07-22 11:03:51.603871] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:06:46.500 [2024-07-22 11:03:51.603958] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:06:46.760 [2024-07-22 11:03:51.743550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:46.760 [2024-07-22 11:03:51.795900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:46.760 [2024-07-22 11:03:51.795959] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:06:46.760 [2024-07-22 11:03:51.795969] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:46.760 [2024-07-22 11:03:51.795977] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:46.760 [2024-07-22 11:03:51.795984] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:06:46.760 [2024-07-22 11:03:51.796012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:06:46.760 [2024-07-22 11:03:51.838766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:06:47.329 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:47.329 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 01:06:47.329 11:03:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:06:47.329 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 01:06:47.329 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:06:47.329 11:03:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:47.329 11:03:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:06:47.329 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:47.329 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:06:47.589 [2024-07-22 11:03:52.536620] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:06:47.589 Malloc0 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:06:47.589 [2024-07-22 11:03:52.605638] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=79166 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 79166 /var/tmp/bdevperf.sock 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 79166 ']' 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:06:47.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:47.589 11:03:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:06:47.589 [2024-07-22 11:03:52.660750] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:06:47.589 [2024-07-22 11:03:52.660836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79166 ] 01:06:47.849 [2024-07-22 11:03:52.805982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:47.849 [2024-07-22 11:03:52.861264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:06:47.849 [2024-07-22 11:03:52.905493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:06:48.415 11:03:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:48.415 11:03:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 01:06:48.415 11:03:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:06:48.415 11:03:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:48.415 11:03:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:06:48.674 NVMe0n1 01:06:48.674 11:03:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:48.674 11:03:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:06:48.674 Running I/O for 10 seconds... 
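Stripped of the xtrace markers, the queue-depth setup above is a short RPC sequence against the target followed by a standalone bdevperf that is configured over its own RPC socket. A condensed sketch with arguments copied from the trace; rpc.py stands in for the rpc_cmd wrapper and the long /home/vagrant/spdk_repo/spdk paths are abbreviated:

    # Target side: TCP transport, a 64 MB / 512 B-block malloc bdev, one subsystem on 10.0.0.2:4420
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevperf idles (-z) on its own socket, gets a controller attached, then runs
    bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The notable difference from the bdev_io_wait runs above is the depth: -q 1024 instead of -q 128, which is the behaviour this test is named after.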
01:06:58.659 01:06:58.659 Latency(us) 01:06:58.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:58.659 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 01:06:58.659 Verification LBA range: start 0x0 length 0x4000 01:06:58.659 NVMe0n1 : 10.07 10396.95 40.61 0.00 0.00 98129.03 19371.28 72431.76 01:06:58.659 =================================================================================================================== 01:06:58.659 Total : 10396.95 40.61 0.00 0.00 98129.03 19371.28 72431.76 01:06:58.659 0 01:06:58.659 11:04:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 79166 01:06:58.659 11:04:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 79166 ']' 01:06:58.659 11:04:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 79166 01:06:58.659 11:04:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 01:06:58.659 11:04:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:58.659 11:04:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79166 01:06:58.659 killing process with pid 79166 01:06:58.659 Received shutdown signal, test time was about 10.000000 seconds 01:06:58.659 01:06:58.659 Latency(us) 01:06:58.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:58.659 =================================================================================================================== 01:06:58.659 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:06:58.659 11:04:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:06:58.659 11:04:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:06:58.659 11:04:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79166' 01:06:58.659 11:04:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 79166 01:06:58.659 11:04:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 79166 01:06:58.919 11:04:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 01:06:58.919 11:04:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 01:06:58.919 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 01:06:58.919 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 01:06:58.919 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:06:58.919 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 01:06:58.919 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 01:06:58.919 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:06:58.919 rmmod nvme_tcp 01:06:58.919 rmmod nvme_fabrics 01:06:59.177 rmmod nvme_keyring 01:06:59.177 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:06:59.177 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 01:06:59.177 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 01:06:59.177 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 79134 ']' 01:06:59.177 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 79134 01:06:59.177 11:04:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 79134 ']' 01:06:59.177 
11:04:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 79134 01:06:59.177 11:04:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 01:06:59.177 11:04:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:59.177 11:04:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79134 01:06:59.177 killing process with pid 79134 01:06:59.177 11:04:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:06:59.177 11:04:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:06:59.177 11:04:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79134' 01:06:59.177 11:04:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 79134 01:06:59.177 11:04:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 79134 01:06:59.437 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:06:59.437 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:06:59.437 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:06:59.437 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:06:59.437 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 01:06:59.437 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:59.437 11:04:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:59.437 11:04:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:59.437 11:04:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:06:59.437 01:06:59.437 real 0m13.575s 01:06:59.437 user 0m22.787s 01:06:59.437 sys 0m2.684s 01:06:59.437 11:04:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 01:06:59.437 ************************************ 01:06:59.437 END TEST nvmf_queue_depth 01:06:59.437 11:04:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:06:59.437 ************************************ 01:06:59.437 11:04:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:06:59.437 11:04:04 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 01:06:59.437 11:04:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:06:59.437 11:04:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:06:59.437 11:04:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:06:59.437 ************************************ 01:06:59.437 START TEST nvmf_target_multipath 01:06:59.437 ************************************ 01:06:59.437 11:04:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 01:06:59.696 * Looking for test storage... 
01:06:59.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:06:59.696 11:04:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:06:59.696 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 01:06:59.696 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:59.696 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:59.696 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:59.696 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:59.696 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:59.696 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:59.696 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:59.697 11:04:04 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:06:59.697 Cannot find device "nvmf_tgt_br" 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:06:59.697 Cannot find device "nvmf_tgt_br2" 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:06:59.697 Cannot find device "nvmf_tgt_br" 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 01:06:59.697 
11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:06:59.697 Cannot find device "nvmf_tgt_br2" 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 01:06:59.697 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:06:59.957 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:06:59.957 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:59.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:59.957 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 01:06:59.957 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:59.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:59.957 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 01:06:59.957 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:06:59.957 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:06:59.957 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:06:59.957 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:06:59.957 11:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:06:59.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:06:59.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 01:06:59.957 01:06:59.957 --- 10.0.0.2 ping statistics --- 01:06:59.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:59.957 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:06:59.957 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:06:59.957 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 01:06:59.957 01:06:59.957 --- 10.0.0.3 ping statistics --- 01:06:59.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:59.957 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:06:59.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:06:59.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 01:06:59.957 01:06:59.957 --- 10.0.0.1 ping statistics --- 01:06:59.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:59.957 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:06:59.957 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=79476 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 79476 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # 
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 79476 ']' 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 01:07:00.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 01:07:00.216 11:04:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:07:00.216 [2024-07-22 11:04:05.243669] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:07:00.216 [2024-07-22 11:04:05.243753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:00.216 [2024-07-22 11:04:05.388586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:07:00.474 [2024-07-22 11:04:05.447033] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:00.474 [2024-07-22 11:04:05.447098] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:00.474 [2024-07-22 11:04:05.447108] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:00.474 [2024-07-22 11:04:05.447116] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:00.474 [2024-07-22 11:04:05.447124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
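In outline, the nvmf_veth_init sequence traced above builds a two-path test topology: one veth pair for the initiator and two for the target, with the target ends moved into the nvmf_tgt_ns_spdk namespace and the host-side peers enslaved to a bridge. The condensed sketch below restates those steps; the interface names, addresses and the 4420 port are taken from the trace, while the loop and the omitted per-link "up" commands are simplifications, and the final comment quotes the target launch line from the log.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk          # target ends live inside the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first listener address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second listener address
ip link add nvmf_br type bridge && ip link set nvmf_br up                # (individual link-up steps omitted)
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                                    # host-side peers join the bridge
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP reach the initiator side
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# The target itself is then started inside the namespace, as in the trace:
#   ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF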
01:07:00.474 [2024-07-22 11:04:05.447324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:07:00.474 [2024-07-22 11:04:05.448315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:07:00.474 [2024-07-22 11:04:05.448400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:07:00.474 [2024-07-22 11:04:05.448402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:07:00.474 [2024-07-22 11:04:05.492995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:07:01.040 11:04:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:07:01.040 11:04:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 01:07:01.040 11:04:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:07:01.040 11:04:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 01:07:01.040 11:04:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:07:01.040 11:04:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:01.040 11:04:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:07:01.297 [2024-07-22 11:04:06.357914] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:01.297 11:04:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:07:01.556 Malloc0 01:07:01.556 11:04:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 01:07:01.815 11:04:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:07:02.074 11:04:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:07:02.074 [2024-07-22 11:04:07.233392] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:02.074 11:04:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:07:02.332 [2024-07-22 11:04:07.441244] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:07:02.332 11:04:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 01:07:02.589 11:04:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 01:07:02.589 11:04:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 01:07:02.589 11:04:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # 
local i=0 01:07:02.589 11:04:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:07:02.589 11:04:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:07:02.589 11:04:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=79569 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:07:05.130 11:04:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 01:07:05.130 [global] 01:07:05.130 thread=1 01:07:05.130 invalidate=1 01:07:05.130 rw=randrw 01:07:05.130 time_based=1 01:07:05.130 runtime=6 01:07:05.130 ioengine=libaio 01:07:05.130 direct=1 01:07:05.130 bs=4096 01:07:05.130 iodepth=128 01:07:05.130 norandommap=0 01:07:05.130 numjobs=1 01:07:05.130 01:07:05.130 verify_dump=1 01:07:05.130 verify_backlog=512 01:07:05.130 verify_state_save=0 01:07:05.130 do_verify=1 01:07:05.130 verify=crc32c-intel 01:07:05.130 [job0] 01:07:05.131 filename=/dev/nvme0n1 01:07:05.131 Could not set queue depth (nvme0n1) 01:07:05.131 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:07:05.131 fio-3.35 01:07:05.131 Starting 1 thread 01:07:05.698 11:04:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:07:05.957 11:04:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:07:06.216 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 01:07:06.216 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:07:06.216 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:07:06.216 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:07:06.216 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 01:07:06.216 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:07:06.216 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 01:07:06.216 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:07:06.216 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:07:06.216 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:07:06.216 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:07:06.216 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:07:06.216 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:07:06.216 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:07:06.475 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 01:07:06.475 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:07:06.475 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:07:06.475 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:07:06.475 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:07:06.475 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:07:06.475 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 01:07:06.475 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:07:06.475 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:07:06.475 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:07:06.475 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:07:06.475 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:07:06.475 11:04:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 79569 01:07:11.807 01:07:11.807 job0: (groupid=0, jobs=1): err= 0: pid=79596: Mon Jul 22 11:04:16 2024 01:07:11.807 read: IOPS=11.4k, BW=44.4MiB/s (46.6MB/s)(267MiB/6006msec) 01:07:11.807 slat (usec): min=4, max=6834, avg=47.69, stdev=167.21 01:07:11.807 clat (usec): min=1135, max=15924, avg=7679.68, stdev=1367.93 01:07:11.807 lat (usec): min=1186, max=16345, avg=7727.36, stdev=1373.75 01:07:11.807 clat percentiles (usec): 01:07:11.807 | 1.00th=[ 4490], 5.00th=[ 5473], 10.00th=[ 6390], 20.00th=[ 6980], 01:07:11.807 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7701], 01:07:11.807 | 70.00th=[ 7898], 80.00th=[ 8160], 90.00th=[ 8848], 95.00th=[10945], 01:07:11.807 | 99.00th=[11994], 99.50th=[12256], 99.90th=[13698], 99.95th=[14746], 01:07:11.807 | 99.99th=[15401] 01:07:11.807 bw ( KiB/s): min= 3224, max=29512, per=52.23%, avg=23756.27, stdev=7640.79, samples=11 01:07:11.807 iops : min= 806, max= 7378, avg=5939.00, stdev=1910.18, samples=11 01:07:11.807 write: IOPS=6765, BW=26.4MiB/s (27.7MB/s)(141MiB/5328msec); 0 zone resets 01:07:11.807 slat (usec): min=6, max=1879, avg=62.10, stdev=112.03 01:07:11.807 clat (usec): min=757, max=16981, avg=6575.13, stdev=1227.65 01:07:11.807 lat (usec): min=811, max=17014, avg=6637.24, stdev=1229.83 01:07:11.807 clat percentiles (usec): 01:07:11.807 | 1.00th=[ 3818], 5.00th=[ 4555], 10.00th=[ 4948], 20.00th=[ 5735], 01:07:11.807 | 30.00th=[ 6194], 40.00th=[ 6521], 50.00th=[ 6718], 60.00th=[ 6915], 01:07:11.807 | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 7635], 95.00th=[ 8029], 01:07:11.807 | 99.00th=[10683], 99.50th=[11600], 99.90th=[14353], 99.95th=[14615], 01:07:11.807 | 99.99th=[15270] 01:07:11.807 bw ( KiB/s): min= 3504, max=28808, per=87.83%, avg=23770.36, stdev=7435.52, samples=11 01:07:11.807 iops : min= 876, max= 7202, avg=5942.55, stdev=1858.87, samples=11 01:07:11.807 lat (usec) : 1000=0.01% 01:07:11.807 lat (msec) : 2=0.06%, 4=0.72%, 10=93.87%, 20=5.34% 01:07:11.807 cpu : usr=7.29%, sys=31.32%, ctx=6331, majf=0, minf=133 01:07:11.807 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 01:07:11.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:11.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:07:11.807 issued rwts: total=68296,36049,0,0 short=0,0,0,0 dropped=0,0,0,0 01:07:11.807 latency : target=0, window=0, percentile=100.00%, depth=128 01:07:11.807 01:07:11.807 Run status group 0 (all jobs): 01:07:11.807 READ: bw=44.4MiB/s (46.6MB/s), 44.4MiB/s-44.4MiB/s (46.6MB/s-46.6MB/s), io=267MiB (280MB), run=6006-6006msec 01:07:11.807 WRITE: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=141MiB (148MB), run=5328-5328msec 01:07:11.807 01:07:11.807 Disk stats (read/write): 01:07:11.807 nvme0n1: ios=67357/35301, merge=0/0, ticks=479995/206912, in_queue=686907, util=98.70% 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=79675 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 01:07:11.807 11:04:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:07:11.807 [global] 01:07:11.807 thread=1 01:07:11.807 invalidate=1 01:07:11.807 rw=randrw 01:07:11.807 time_based=1 01:07:11.807 runtime=6 01:07:11.807 ioengine=libaio 01:07:11.807 direct=1 01:07:11.807 bs=4096 01:07:11.807 iodepth=128 01:07:11.807 norandommap=0 01:07:11.807 numjobs=1 01:07:11.807 01:07:11.807 verify_dump=1 01:07:11.807 verify_backlog=512 01:07:11.807 verify_state_save=0 01:07:11.807 do_verify=1 01:07:11.807 verify=crc32c-intel 01:07:11.807 [job0] 01:07:11.807 filename=/dev/nvme0n1 01:07:11.807 Could not set queue depth (nvme0n1) 01:07:11.807 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:07:11.807 fio-3.35 01:07:11.807 Starting 1 thread 01:07:12.374 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:07:12.634 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:07:12.893 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 01:07:12.893 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:07:12.893 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
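The failover exercise around this point reduces to two RPC calls that retag the ANA state of each listener, followed by polling each path's sysfs ana_state file until the host reports the new value while fio keeps running against the multipath device. A minimal sketch of that step follows; the RPC arguments are the ones from the trace, while the helper name wait_for_ana and the one-second poll interval are illustrative (the script's own check_ana_state helper does the equivalent with a timeout of 20).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Fail the first path and degrade the second one on the target side.
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4420 -n non_optimized

# Wait until the host-side multipath code observes the change, e.g. wait_for_ana nvme0c0n1 inaccessible.
wait_for_ana() {
    local path=$1 expected=$2 timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while (( timeout-- > 0 )); do
        [[ -e $ana_state_f && $(<"$ana_state_f") == "$expected" ]] && return 0
        sleep 1
    done
    return 1
}
wait_for_ana nvme0c0n1 inaccessible
wait_for_ana nvme0c1n1 non-optimized   # sysfs spells the state with a hyphen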
01:07:12.893 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:07:12.893 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:07:12.893 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:07:12.893 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 01:07:12.893 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:07:12.893 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:07:12.893 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:07:12.893 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:07:12.893 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:07:12.893 11:04:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:07:13.151 11:04:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:07:13.410 11:04:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 01:07:13.410 11:04:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:07:13.410 11:04:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:07:13.410 11:04:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:07:13.410 11:04:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:07:13.410 11:04:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:07:13.410 11:04:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 01:07:13.410 11:04:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:07:13.410 11:04:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:07:13.410 11:04:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:07:13.410 11:04:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:07:13.410 11:04:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:07:13.410 11:04:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 79675 01:07:18.682 01:07:18.682 job0: (groupid=0, jobs=1): err= 0: pid=79696: Mon Jul 22 11:04:22 2024 01:07:18.682 read: IOPS=11.3k, BW=44.3MiB/s (46.5MB/s)(266MiB/6005msec) 01:07:18.682 slat (usec): min=6, max=7019, avg=43.82, stdev=160.00 01:07:18.682 clat (usec): min=424, max=21266, avg=7735.84, stdev=2352.65 01:07:18.682 lat (usec): min=441, max=21278, avg=7779.66, stdev=2357.62 01:07:18.682 clat percentiles (usec): 01:07:18.682 | 1.00th=[ 1663], 5.00th=[ 4228], 10.00th=[ 5276], 20.00th=[ 6652], 01:07:18.682 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7832], 01:07:18.682 | 70.00th=[ 8029], 80.00th=[ 8356], 90.00th=[10552], 95.00th=[12125], 01:07:18.682 | 99.00th=[16188], 99.50th=[16909], 99.90th=[18744], 99.95th=[19530], 01:07:18.682 | 99.99th=[20579] 01:07:18.682 bw ( KiB/s): min=13296, max=34184, per=52.88%, avg=23993.18, stdev=7083.92, samples=11 01:07:18.682 iops : min= 3324, max= 8546, avg=5998.27, stdev=1770.96, samples=11 01:07:18.682 write: IOPS=6846, BW=26.7MiB/s (28.0MB/s)(141MiB/5267msec); 0 zone resets 01:07:18.682 slat (usec): min=14, max=1407, avg=55.26, stdev=94.40 01:07:18.682 clat (usec): min=493, max=19255, avg=6517.48, stdev=2373.59 01:07:18.682 lat (usec): min=527, max=19288, avg=6572.74, stdev=2377.18 01:07:18.682 clat percentiles (usec): 01:07:18.682 | 1.00th=[ 1287], 5.00th=[ 2704], 10.00th=[ 3916], 20.00th=[ 4817], 01:07:18.682 | 30.00th=[ 5669], 40.00th=[ 6325], 50.00th=[ 6718], 60.00th=[ 6980], 01:07:18.682 | 70.00th=[ 7177], 80.00th=[ 7504], 90.00th=[ 8029], 95.00th=[11207], 01:07:18.682 | 99.00th=[14746], 99.50th=[15270], 99.90th=[16057], 99.95th=[16319], 01:07:18.682 | 99.99th=[19006] 01:07:18.682 bw ( KiB/s): min=13904, max=33880, per=87.64%, avg=23999.91, stdev=6867.07, samples=11 01:07:18.682 iops : min= 3476, max= 8470, avg=5999.91, stdev=1716.72, samples=11 01:07:18.682 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.14% 01:07:18.682 lat (msec) : 2=2.03%, 4=4.50%, 10=84.22%, 20=9.04%, 50=0.02% 01:07:18.682 cpu : usr=7.33%, sys=30.90%, ctx=6702, majf=0, minf=151 01:07:18.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 01:07:18.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:18.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:07:18.682 issued rwts: total=68112,36059,0,0 short=0,0,0,0 dropped=0,0,0,0 01:07:18.682 latency : target=0, window=0, percentile=100.00%, depth=128 01:07:18.682 01:07:18.682 Run status group 0 (all jobs): 01:07:18.682 READ: bw=44.3MiB/s (46.5MB/s), 44.3MiB/s-44.3MiB/s (46.5MB/s-46.5MB/s), io=266MiB (279MB), run=6005-6005msec 01:07:18.682 WRITE: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=141MiB (148MB), run=5267-5267msec 01:07:18.682 01:07:18.682 Disk stats (read/write): 01:07:18.682 nvme0n1: ios=67147/35388, merge=0/0, ticks=484573/209398, in_queue=693971, util=98.71% 01:07:18.682 11:04:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:07:18.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:07:18.682 11:04:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:07:18.682 11:04:22 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 01:07:18.682 11:04:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:07:18.682 11:04:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:07:18.682 11:04:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:07:18.682 11:04:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:07:18.682 11:04:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 01:07:18.682 11:04:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:07:18.682 rmmod nvme_tcp 01:07:18.682 rmmod nvme_fabrics 01:07:18.682 rmmod nvme_keyring 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 79476 ']' 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 79476 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 79476 ']' 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 79476 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79476 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79476' 01:07:18.682 killing process with pid 79476 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 79476 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 79476 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:18.682 11:04:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:18.683 11:04:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:18.683 11:04:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:07:18.683 01:07:18.683 real 0m19.072s 01:07:18.683 user 1m10.736s 01:07:18.683 sys 0m10.350s 01:07:18.683 11:04:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 01:07:18.683 ************************************ 01:07:18.683 END TEST nvmf_target_multipath 01:07:18.683 ************************************ 01:07:18.683 11:04:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:07:18.683 11:04:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:07:18.683 11:04:23 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 01:07:18.683 11:04:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:07:18.683 11:04:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:07:18.683 11:04:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:07:18.683 ************************************ 01:07:18.683 START TEST nvmf_zcopy 01:07:18.683 ************************************ 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 01:07:18.683 * Looking for test storage... 
01:07:18.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 01:07:18.683 11:04:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:07:18.942 Cannot find device "nvmf_tgt_br" 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:07:18.942 Cannot find device "nvmf_tgt_br2" 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:07:18.942 Cannot find device "nvmf_tgt_br" 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 01:07:18.942 11:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:07:18.942 Cannot find device "nvmf_tgt_br2" 01:07:18.942 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 01:07:18.942 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:07:18.942 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:07:18.942 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:07:18.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:18.942 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 01:07:18.942 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:07:18.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:18.942 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 01:07:18.942 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:07:18.942 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:07:18.942 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:07:18.942 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:07:19.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:07:19.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 01:07:19.201 01:07:19.201 --- 10.0.0.2 ping statistics --- 01:07:19.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:19.201 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:07:19.201 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:07:19.201 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 01:07:19.201 01:07:19.201 --- 10.0.0.3 ping statistics --- 01:07:19.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:19.201 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:07:19.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:07:19.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 01:07:19.201 01:07:19.201 --- 10.0.0.1 ping statistics --- 01:07:19.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:19.201 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:07:19.201 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:07:19.460 11:04:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 01:07:19.460 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:07:19.460 11:04:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 01:07:19.460 11:04:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:07:19.460 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=79948 01:07:19.460 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:07:19.460 11:04:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 79948 01:07:19.460 11:04:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 79948 ']' 01:07:19.460 11:04:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:19.460 11:04:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 01:07:19.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:19.460 11:04:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:19.460 11:04:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 01:07:19.460 11:04:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:07:19.460 [2024-07-22 11:04:24.479823] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:07:19.460 [2024-07-22 11:04:24.479947] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:19.460 [2024-07-22 11:04:24.623745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:19.719 [2024-07-22 11:04:24.708595] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:19.719 [2024-07-22 11:04:24.708656] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
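[editor's note] For reference, the nvmf_veth_init sequence traced above reduces to the following topology: the initiator side keeps 10.0.0.1 on nvmf_init_if, the target namespace nvmf_tgt_ns_spdk holds 10.0.0.2 and 10.0.0.3, and everything is joined through the nvmf_br bridge with TCP/4420 allowed in. This is a condensed sketch assembled only from the commands shown in the trace (interface, namespace, and address names are taken verbatim from it), not the full nvmf/common.sh helper:

    # sketch: veth/bridge topology built by nvmf_veth_init in the trace above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity check, mirroring the pings in the log
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The ping results that follow in the log (10.0.0.2, 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm this topology is up before the target is started.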
01:07:19.719 [2024-07-22 11:04:24.708666] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:19.719 [2024-07-22 11:04:24.708674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:19.719 [2024-07-22 11:04:24.708682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:07:19.720 [2024-07-22 11:04:24.708710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:07:19.720 [2024-07-22 11:04:24.793297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:07:20.288 [2024-07-22 11:04:25.419019] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:07:20.288 [2024-07-22 11:04:25.443156] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
01:07:20.288 malloc0 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:20.288 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:07:20.547 11:04:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:20.547 11:04:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 01:07:20.547 11:04:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 01:07:20.547 11:04:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 01:07:20.547 11:04:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 01:07:20.547 11:04:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:07:20.547 11:04:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:07:20.547 { 01:07:20.547 "params": { 01:07:20.547 "name": "Nvme$subsystem", 01:07:20.547 "trtype": "$TEST_TRANSPORT", 01:07:20.547 "traddr": "$NVMF_FIRST_TARGET_IP", 01:07:20.547 "adrfam": "ipv4", 01:07:20.547 "trsvcid": "$NVMF_PORT", 01:07:20.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:07:20.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:07:20.547 "hdgst": ${hdgst:-false}, 01:07:20.547 "ddgst": ${ddgst:-false} 01:07:20.547 }, 01:07:20.547 "method": "bdev_nvme_attach_controller" 01:07:20.547 } 01:07:20.547 EOF 01:07:20.547 )") 01:07:20.547 11:04:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 01:07:20.547 11:04:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 01:07:20.547 11:04:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 01:07:20.547 11:04:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:07:20.547 "params": { 01:07:20.547 "name": "Nvme1", 01:07:20.547 "trtype": "tcp", 01:07:20.547 "traddr": "10.0.0.2", 01:07:20.547 "adrfam": "ipv4", 01:07:20.547 "trsvcid": "4420", 01:07:20.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:07:20.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:07:20.547 "hdgst": false, 01:07:20.547 "ddgst": false 01:07:20.547 }, 01:07:20.547 "method": "bdev_nvme_attach_controller" 01:07:20.547 }' 01:07:20.547 [2024-07-22 11:04:25.548321] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:07:20.547 [2024-07-22 11:04:25.548417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79983 ] 01:07:20.547 [2024-07-22 11:04:25.693952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:20.547 [2024-07-22 11:04:25.748154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:07:20.807 [2024-07-22 11:04:25.802134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:07:20.807 Running I/O for 10 seconds... 
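[editor's note] The zcopy target itself is assembled with a handful of RPCs before bdevperf is pointed at it. Below is a condensed sketch of the traced sequence; the rpc.py path and socket handling are assumptions (the test drives the same calls through its rpc_cmd wrapper against the nvmf_tgt launched with "ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2"), and the JSON shown is the bdev_nvme_attach_controller fragment that gen_nvmf_target_json prints above before handing a full bdev config to bdevperf on /dev/fd/62:

    # sketch: standalone equivalents of the traced rpc_cmd calls
    rpc=./scripts/rpc.py   # assumed path inside an SPDK checkout

    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled, as traced
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0               # malloc bdev, size/block-size arguments as traced
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # initiator-side config fragment emitted by gen_nvmf_target_json above; the
    # helper wraps it into a bdev subsystem config before piping it to:
    #   bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
    cat <<'EOF'
    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF

The 10-second verify run whose results follow uses this config; the later 5-second randrw run (perfpid 80099) reuses the same target, which is why its repeated nvmf_subsystem_add_ns attempts during the pause/resume loop report "Requested NSID 1 already in use".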
01:07:30.792 01:07:30.792 Latency(us) 01:07:30.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:30.792 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 01:07:30.793 Verification LBA range: start 0x0 length 0x1000 01:07:30.793 Nvme1n1 : 10.01 7835.14 61.21 0.00 0.00 16286.75 1776.58 25161.61 01:07:30.793 =================================================================================================================== 01:07:30.793 Total : 7835.14 61.21 0.00 0.00 16286.75 1776.58 25161.61 01:07:31.052 11:04:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=80099 01:07:31.052 11:04:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 01:07:31.052 11:04:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:07:31.052 11:04:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 01:07:31.052 11:04:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 01:07:31.052 11:04:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 01:07:31.052 11:04:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 01:07:31.052 11:04:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:07:31.052 11:04:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:07:31.052 { 01:07:31.052 "params": { 01:07:31.052 "name": "Nvme$subsystem", 01:07:31.052 "trtype": "$TEST_TRANSPORT", 01:07:31.052 "traddr": "$NVMF_FIRST_TARGET_IP", 01:07:31.052 "adrfam": "ipv4", 01:07:31.052 "trsvcid": "$NVMF_PORT", 01:07:31.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:07:31.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:07:31.052 "hdgst": ${hdgst:-false}, 01:07:31.052 "ddgst": ${ddgst:-false} 01:07:31.052 }, 01:07:31.052 "method": "bdev_nvme_attach_controller" 01:07:31.052 } 01:07:31.052 EOF 01:07:31.052 )") 01:07:31.052 11:04:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 01:07:31.052 [2024-07-22 11:04:36.109317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.052 [2024-07-22 11:04:36.109385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.052 11:04:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
01:07:31.052 11:04:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 01:07:31.052 11:04:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:07:31.052 "params": { 01:07:31.052 "name": "Nvme1", 01:07:31.052 "trtype": "tcp", 01:07:31.052 "traddr": "10.0.0.2", 01:07:31.052 "adrfam": "ipv4", 01:07:31.052 "trsvcid": "4420", 01:07:31.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:07:31.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:07:31.052 "hdgst": false, 01:07:31.052 "ddgst": false 01:07:31.052 }, 01:07:31.052 "method": "bdev_nvme_attach_controller" 01:07:31.052 }' 01:07:31.052 [2024-07-22 11:04:36.125224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.052 [2024-07-22 11:04:36.125256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.052 [2024-07-22 11:04:36.137197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.052 [2024-07-22 11:04:36.137223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.052 [2024-07-22 11:04:36.149180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.052 [2024-07-22 11:04:36.149207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.052 [2024-07-22 11:04:36.157771] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:07:31.052 [2024-07-22 11:04:36.157877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80099 ] 01:07:31.052 [2024-07-22 11:04:36.161177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.052 [2024-07-22 11:04:36.161203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.052 [2024-07-22 11:04:36.173145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.052 [2024-07-22 11:04:36.173171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.052 [2024-07-22 11:04:36.185125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.052 [2024-07-22 11:04:36.185150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.052 [2024-07-22 11:04:36.197138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.052 [2024-07-22 11:04:36.197162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.052 [2024-07-22 11:04:36.209119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.052 [2024-07-22 11:04:36.209145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.052 [2024-07-22 11:04:36.221104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.052 [2024-07-22 11:04:36.221130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.052 [2024-07-22 11:04:36.237092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.052 [2024-07-22 11:04:36.237118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.052 [2024-07-22 11:04:36.253059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 01:07:31.052 [2024-07-22 11:04:36.253086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.269057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.269088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.285015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.285044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.300994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.301036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.302741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:31.312 [2024-07-22 11:04:36.316963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.316989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.332978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.333006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.348938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.348971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.355843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:07:31.312 [2024-07-22 11:04:36.364918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.364945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.380893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.380920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.396873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.396903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.410150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:07:31.312 [2024-07-22 11:04:36.412848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.412881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.428821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.428862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.444828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.444875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.460856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 
11:04:36.460902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.476804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.476838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.492793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.492827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.312 [2024-07-22 11:04:36.508787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.312 [2024-07-22 11:04:36.508821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.571 Running I/O for 5 seconds... 01:07:31.571 [2024-07-22 11:04:36.527462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.571 [2024-07-22 11:04:36.527499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.571 [2024-07-22 11:04:36.555670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.571 [2024-07-22 11:04:36.555786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.571 [2024-07-22 11:04:36.576059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.571 [2024-07-22 11:04:36.576131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.571 [2024-07-22 11:04:36.595673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.571 [2024-07-22 11:04:36.595737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.571 [2024-07-22 11:04:36.614222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.571 [2024-07-22 11:04:36.614287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.571 [2024-07-22 11:04:36.632540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.571 [2024-07-22 11:04:36.632617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.571 [2024-07-22 11:04:36.651119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.571 [2024-07-22 11:04:36.651182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.571 [2024-07-22 11:04:36.669844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.571 [2024-07-22 11:04:36.669917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.571 [2024-07-22 11:04:36.688525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.571 [2024-07-22 11:04:36.688599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.571 [2024-07-22 11:04:36.705206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.571 [2024-07-22 11:04:36.705269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.571 [2024-07-22 11:04:36.725220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.571 [2024-07-22 11:04:36.725295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.571 [2024-07-22 
11:04:36.744103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.571 [2024-07-22 11:04:36.744165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.571 [2024-07-22 11:04:36.761294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.571 [2024-07-22 11:04:36.761359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.830 [2024-07-22 11:04:36.781015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.830 [2024-07-22 11:04:36.781087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.830 [2024-07-22 11:04:36.800055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.830 [2024-07-22 11:04:36.800114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.830 [2024-07-22 11:04:36.815882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.830 [2024-07-22 11:04:36.815927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.830 [2024-07-22 11:04:36.835432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.830 [2024-07-22 11:04:36.835488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.830 [2024-07-22 11:04:36.854377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.830 [2024-07-22 11:04:36.854431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.830 [2024-07-22 11:04:36.873105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.830 [2024-07-22 11:04:36.873153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.830 [2024-07-22 11:04:36.891956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.830 [2024-07-22 11:04:36.892016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.830 [2024-07-22 11:04:36.911829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.830 [2024-07-22 11:04:36.911931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.830 [2024-07-22 11:04:36.930452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.830 [2024-07-22 11:04:36.930519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.830 [2024-07-22 11:04:36.948955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.831 [2024-07-22 11:04:36.949029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.831 [2024-07-22 11:04:36.967522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.831 [2024-07-22 11:04:36.967584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.831 [2024-07-22 11:04:36.987081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.831 [2024-07-22 11:04:36.987152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.831 [2024-07-22 11:04:37.003472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.831 [2024-07-22 11:04:37.003576] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:31.831 [2024-07-22 11:04:37.020300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:31.831 [2024-07-22 11:04:37.020368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.089 [2024-07-22 11:04:37.036966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.089 [2024-07-22 11:04:37.037039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.089 [2024-07-22 11:04:37.057273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.089 [2024-07-22 11:04:37.057343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.089 [2024-07-22 11:04:37.075930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.089 [2024-07-22 11:04:37.076001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.089 [2024-07-22 11:04:37.092775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.089 [2024-07-22 11:04:37.092844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.089 [2024-07-22 11:04:37.112526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.089 [2024-07-22 11:04:37.112600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.089 [2024-07-22 11:04:37.131612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.089 [2024-07-22 11:04:37.131682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.089 [2024-07-22 11:04:37.152274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.089 [2024-07-22 11:04:37.152340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.089 [2024-07-22 11:04:37.169111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.089 [2024-07-22 11:04:37.169176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.089 [2024-07-22 11:04:37.189257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.089 [2024-07-22 11:04:37.189330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.089 [2024-07-22 11:04:37.208169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.090 [2024-07-22 11:04:37.208234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.090 [2024-07-22 11:04:37.224693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.090 [2024-07-22 11:04:37.224758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.090 [2024-07-22 11:04:37.244382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.090 [2024-07-22 11:04:37.244447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.090 [2024-07-22 11:04:37.263552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.090 [2024-07-22 11:04:37.263617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.090 [2024-07-22 11:04:37.280129] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.090 [2024-07-22 11:04:37.280200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.348 [2024-07-22 11:04:37.297436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.348 [2024-07-22 11:04:37.297506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.348 [2024-07-22 11:04:37.316892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.348 [2024-07-22 11:04:37.316964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.348 [2024-07-22 11:04:37.336645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.348 [2024-07-22 11:04:37.336715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.348 [2024-07-22 11:04:37.357067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.348 [2024-07-22 11:04:37.357145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.348 [2024-07-22 11:04:37.374070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.348 [2024-07-22 11:04:37.374134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.348 [2024-07-22 11:04:37.393685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.348 [2024-07-22 11:04:37.393756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.348 [2024-07-22 11:04:37.412599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.348 [2024-07-22 11:04:37.412676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.348 [2024-07-22 11:04:37.429130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.348 [2024-07-22 11:04:37.429197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.349 [2024-07-22 11:04:37.445294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.349 [2024-07-22 11:04:37.445366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.349 [2024-07-22 11:04:37.464148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.349 [2024-07-22 11:04:37.464217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.349 [2024-07-22 11:04:37.480400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.349 [2024-07-22 11:04:37.480460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.349 [2024-07-22 11:04:37.499159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.349 [2024-07-22 11:04:37.499224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.349 [2024-07-22 11:04:37.515795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.349 [2024-07-22 11:04:37.515889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.349 [2024-07-22 11:04:37.535705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.349 [2024-07-22 11:04:37.535786] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.349 [2024-07-22 11:04:37.552686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.349 [2024-07-22 11:04:37.552766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.608 [2024-07-22 11:04:37.572097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.608 [2024-07-22 11:04:37.572167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.608 [2024-07-22 11:04:37.591078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.608 [2024-07-22 11:04:37.591146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.608 [2024-07-22 11:04:37.608155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.608 [2024-07-22 11:04:37.608221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.608 [2024-07-22 11:04:37.627825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.608 [2024-07-22 11:04:37.627903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.608 [2024-07-22 11:04:37.646845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.608 [2024-07-22 11:04:37.646934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.608 [2024-07-22 11:04:37.666913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.608 [2024-07-22 11:04:37.666995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.608 [2024-07-22 11:04:37.686322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.608 [2024-07-22 11:04:37.686390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.608 [2024-07-22 11:04:37.706212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.608 [2024-07-22 11:04:37.706285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.608 [2024-07-22 11:04:37.723784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.608 [2024-07-22 11:04:37.723867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.608 [2024-07-22 11:04:37.743611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.608 [2024-07-22 11:04:37.743666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.608 [2024-07-22 11:04:37.760527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.608 [2024-07-22 11:04:37.760585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.608 [2024-07-22 11:04:37.780623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.608 [2024-07-22 11:04:37.780700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.608 [2024-07-22 11:04:37.797796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.608 [2024-07-22 11:04:37.797879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:37.817401] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:37.817487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:37.833944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:37.834012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:37.851628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:37.851694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:37.872229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:37.872297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:37.892176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:37.892236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:37.911397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:37.911449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:37.931115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:37.931178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:37.947723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:37.947787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:37.964927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:37.964983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:37.981666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:37.981723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:37.998643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:37.998712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:38.015260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:38.015325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:38.035599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:38.035672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:38.046493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:38.046563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:32.867 [2024-07-22 11:04:38.061909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:32.867 [2024-07-22 11:04:38.061972] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.078693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.078768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.095225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.095295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.115100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.115170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.129603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.129676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.141207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.141267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.156546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.156608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.172528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.172590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.188681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.188765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.199387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.199447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.215361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.215428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.232371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.232433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.248663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.248727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.265636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.265697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.281310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.281373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.295972] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.296031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.311890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.311950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.127 [2024-07-22 11:04:38.327581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.127 [2024-07-22 11:04:38.327644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.386 [2024-07-22 11:04:38.338987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.386 [2024-07-22 11:04:38.339059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.386 [2024-07-22 11:04:38.354449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.386 [2024-07-22 11:04:38.354511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.386 [2024-07-22 11:04:38.371268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.386 [2024-07-22 11:04:38.371329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.386 [2024-07-22 11:04:38.387383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.386 [2024-07-22 11:04:38.387446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.386 [2024-07-22 11:04:38.404278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.386 [2024-07-22 11:04:38.404345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.386 [2024-07-22 11:04:38.420385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.386 [2024-07-22 11:04:38.420456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.386 [2024-07-22 11:04:38.437430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.386 [2024-07-22 11:04:38.437489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.386 [2024-07-22 11:04:38.453955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.386 [2024-07-22 11:04:38.454021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.386 [2024-07-22 11:04:38.471127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.386 [2024-07-22 11:04:38.471191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.386 [2024-07-22 11:04:38.488155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.386 [2024-07-22 11:04:38.488222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.386 [2024-07-22 11:04:38.505621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.386 [2024-07-22 11:04:38.505687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:33.386 [2024-07-22 11:04:38.523311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:07:33.386 [2024-07-22 11:04:38.523374] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:07:33.386 [repetitive output trimmed: the following pair of error lines recurs continuously from 11:04:38.539 through 11:04:41.702 (elapsed 01:07:33.386 to 01:07:36.750) while the zcopy I/O job is still running, interleaved with the job summary below; one occurrence is kept as a sample]
01:07:33.386 [2024-07-22 11:04:38.556262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:07:33.386 [2024-07-22 11:04:38.556327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:07:36.492 Latency(us)
01:07:36.492 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
01:07:36.492 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
01:07:36.492    Nvme1n1              :       5.01   13981.46     109.23       0.00       0.00    9146.56    3790.03   28425.25
01:07:36.492 ===================================================================================================================
01:07:36.492 Total                :              13981.46     109.23       0.00       0.00    9146.56    3790.03   28425.25
01:07:36.750 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (80099) - No such process
01:07:36.750 11:04:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 80099
11:04:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
11:04:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
11:04:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
11:04:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
11:04:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
11:04:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
11:04:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
01:07:36.750 delay0
01:07:36.750 11:04:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
11:04:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
11:04:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
11:04:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
11:04:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
11:04:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
01:07:36.750 [2024-07-22 11:04:41.927006] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
01:07:43.323 Initializing NVMe Controllers
01:07:43.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
01:07:43.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
01:07:43.323 Initialization complete. Launching workers.
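The delay-bdev swap that zcopy.sh performs above can be reproduced by hand against a running target. A minimal sketch using scripts/rpc.py (assuming the default /var/tmp/spdk.sock RPC socket and the same bdev and subsystem names as this run; this is not the suite's own helper code):

  # swap the malloc-backed namespace for a delay bdev (latency arguments are in microseconds, so roughly 1s per I/O)
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue slow I/O against the delayed namespace and exercise abort handling
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

With every I/O held for about a second, the abort tool reliably finds commands still in flight to cancel, which is what the submitted/success counters that follow measure.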
01:07:43.323 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 691 01:07:43.323 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 978, failed to submit 33 01:07:43.323 success 851, unsuccess 127, failed 0 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:07:43.323 rmmod nvme_tcp 01:07:43.323 rmmod nvme_fabrics 01:07:43.323 rmmod nvme_keyring 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 79948 ']' 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 79948 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 79948 ']' 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 79948 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79948 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:07:43.323 killing process with pid 79948 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79948' 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 79948 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 79948 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:07:43.323 01:07:43.323 real 0m24.773s 01:07:43.323 user 0m38.609s 01:07:43.323 sys 0m8.977s 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 01:07:43.323 11:04:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:07:43.323 ************************************ 01:07:43.323 END TEST nvmf_zcopy 01:07:43.323 ************************************ 01:07:43.581 11:04:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:07:43.581 11:04:48 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 01:07:43.581 11:04:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:07:43.581 11:04:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:07:43.581 11:04:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:07:43.581 ************************************ 01:07:43.581 START TEST nvmf_nmic 01:07:43.581 ************************************ 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 01:07:43.581 * Looking for test storage... 01:07:43.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:07:43.581 11:04:48 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated tool-path prefixes and standard system paths trimmed]:/var/lib/snapd/snap/bin
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same prefixes, rotated; value trimmed]:/var/lib/snapd/snap/bin
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same prefixes, rotated; value trimmed]:/var/lib/snapd/snap/bin
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same value as @4; trimmed]:/var/lib/snapd/snap/bin
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic --
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:07:43.582 Cannot find device "nvmf_tgt_br" 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 01:07:43.582 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:07:43.840 Cannot find device "nvmf_tgt_br2" 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:07:43.840 Cannot find device "nvmf_tgt_br" 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:07:43.840 Cannot find device "nvmf_tgt_br2" 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:07:43.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:07:43.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:07:43.840 11:04:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:07:43.840 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:07:43.840 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:07:43.840 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:07:43.840 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:07:43.840 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:07:43.840 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:07:43.840 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:07:43.840 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:07:43.840 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:07:44.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:07:44.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 01:07:44.098 01:07:44.098 --- 10.0.0.2 ping statistics --- 01:07:44.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:44.098 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:07:44.098 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:07:44.098 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 01:07:44.098 01:07:44.098 --- 10.0.0.3 ping statistics --- 01:07:44.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:44.098 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:07:44.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:07:44.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 01:07:44.098 01:07:44.098 --- 10.0.0.1 ping statistics --- 01:07:44.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:44.098 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=80423 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 80423 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 80423 ']' 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:44.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 01:07:44.098 11:04:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:07:44.098 [2024-07-22 11:04:49.167669] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
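For reference, the nvmf_veth_init trace above reduces to a small veth-plus-namespace topology. A condensed sketch of the essential steps (the second target interface carrying 10.0.0.3 is omitted; interface and namespace names follow this run):

  # the target runs in its own network namespace, reachable from the host through a bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # host-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                   # host -> target, as in the output above
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> host, as in the output above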
01:07:44.098 [2024-07-22 11:04:49.167746] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:44.357 [2024-07-22 11:04:49.311213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:07:44.357 [2024-07-22 11:04:49.359549] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:44.357 [2024-07-22 11:04:49.359603] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:44.357 [2024-07-22 11:04:49.359613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:44.357 [2024-07-22 11:04:49.359621] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:44.357 [2024-07-22 11:04:49.359628] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:07:44.357 [2024-07-22 11:04:49.359759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:07:44.357 [2024-07-22 11:04:49.359949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:07:44.357 [2024-07-22 11:04:49.360809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:07:44.357 [2024-07-22 11:04:49.360812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:07:44.357 [2024-07-22 11:04:49.401758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:07:44.919 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:07:44.919 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 01:07:44.919 11:04:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:07:44.919 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 01:07:44.919 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:07:44.919 11:04:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:44.919 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:07:44.919 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:44.919 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:07:44.919 [2024-07-22 11:04:50.091136] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:44.919 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:44.919 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:07:44.919 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:44.919 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:07:45.175 Malloc0 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
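From here the trace shows the target being provisioned entirely over JSON-RPC: nvmf_tgt is launched inside the namespace, waitforlisten polls until /var/tmp/spdk.sock answers, and rpc_cmd (a thin wrapper over scripts/rpc.py) creates the TCP transport, a 64 MiB Malloc bdev, and subsystem cnode1. Driving the same steps with rpc.py directly would look roughly like this; the polling loop with rpc_get_methods is an assumed stand-in for the harness's waitforlisten, while the RPC calls themselves are the ones in the trace:

# launch the target inside the test namespace (shm id 0, tracepoint mask 0xFFFF, core mask 0xF)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# wait until the RPC socket at /var/tmp/spdk.sock accepts requests
until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

"$rpc" nvmf_create_transport -t tcp -o -u 8192                        # transport options as passed via NVMF_TRANSPORT_OPTS
"$rpc" bdev_malloc_create 64 512 -b Malloc0                           # 64 MiB RAM bdev, 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # allow any host, set serial

The namespace and listener for cnode1 are added immediately after this in the trace (nvmf_subsystem_add_ns Malloc0, then nvmf_subsystem_add_listener on 10.0.0.2 port 4420).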
01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:07:45.176 [2024-07-22 11:04:50.152458] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 01:07:45.176 test case1: single bdev can't be used in multiple subsystems 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:07:45.176 [2024-07-22 11:04:50.176282] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 01:07:45.176 [2024-07-22 11:04:50.176312] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 01:07:45.176 [2024-07-22 11:04:50.176323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:07:45.176 request: 01:07:45.176 { 01:07:45.176 "nqn": "nqn.2016-06.io.spdk:cnode2", 01:07:45.176 "namespace": { 01:07:45.176 "bdev_name": "Malloc0", 01:07:45.176 "no_auto_visible": false 01:07:45.176 }, 01:07:45.176 "method": "nvmf_subsystem_add_ns", 01:07:45.176 "req_id": 1 01:07:45.176 } 01:07:45.176 Got JSON-RPC error response 01:07:45.176 response: 01:07:45.176 { 01:07:45.176 "code": -32602, 01:07:45.176 "message": "Invalid parameters" 01:07:45.176 } 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 
-eq 0 ']' 01:07:45.176 Adding namespace failed - expected result. 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 01:07:45.176 test case2: host connect to nvmf target in multiple paths 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:07:45.176 [2024-07-22 11:04:50.192378] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:07:45.176 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 01:07:45.433 11:04:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 01:07:45.433 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 01:07:45.433 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:07:45.433 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:07:45.433 11:04:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 01:07:47.330 11:04:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:07:47.330 11:04:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:07:47.330 11:04:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:07:47.330 11:04:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:07:47.330 11:04:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:07:47.330 11:04:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 01:07:47.330 11:04:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:07:47.330 [global] 01:07:47.330 thread=1 01:07:47.330 invalidate=1 01:07:47.330 rw=write 01:07:47.330 time_based=1 01:07:47.330 runtime=1 01:07:47.330 ioengine=libaio 01:07:47.330 direct=1 01:07:47.330 bs=4096 01:07:47.330 iodepth=1 01:07:47.330 norandommap=0 01:07:47.330 numjobs=1 01:07:47.330 01:07:47.330 verify_dump=1 01:07:47.330 verify_backlog=512 01:07:47.330 verify_state_save=0 01:07:47.330 do_verify=1 01:07:47.330 verify=crc32c-intel 01:07:47.330 [job0] 01:07:47.330 filename=/dev/nvme0n1 01:07:47.330 Could not set queue depth (nvme0n1) 01:07:47.588 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:07:47.588 fio-3.35 01:07:47.588 Starting 1 thread 01:07:48.966 01:07:48.966 job0: (groupid=0, jobs=1): err= 0: pid=80509: Mon Jul 22 11:04:53 
2024 01:07:48.966 read: IOPS=4096, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1000msec) 01:07:48.966 slat (nsec): min=7433, max=33917, avg=8306.56, stdev=1538.35 01:07:48.966 clat (usec): min=100, max=690, avg=135.11, stdev=16.13 01:07:48.966 lat (usec): min=109, max=697, avg=143.42, stdev=16.18 01:07:48.966 clat percentiles (usec): 01:07:48.966 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 123], 01:07:48.966 | 30.00th=[ 128], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 01:07:48.966 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 151], 95.00th=[ 157], 01:07:48.966 | 99.00th=[ 167], 99.50th=[ 172], 99.90th=[ 184], 99.95th=[ 186], 01:07:48.966 | 99.99th=[ 693] 01:07:48.966 write: IOPS=4266, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1000msec); 0 zone resets 01:07:48.966 slat (usec): min=11, max=113, avg=13.73, stdev= 5.78 01:07:48.966 clat (usec): min=59, max=515, avg=81.06, stdev=12.22 01:07:48.966 lat (usec): min=71, max=534, avg=94.80, stdev=14.53 01:07:48.966 clat percentiles (usec): 01:07:48.966 | 1.00th=[ 64], 5.00th=[ 67], 10.00th=[ 69], 20.00th=[ 73], 01:07:48.966 | 30.00th=[ 76], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 83], 01:07:48.966 | 70.00th=[ 86], 80.00th=[ 89], 90.00th=[ 93], 95.00th=[ 98], 01:07:48.966 | 99.00th=[ 113], 99.50th=[ 118], 99.90th=[ 127], 99.95th=[ 169], 01:07:48.966 | 99.99th=[ 515] 01:07:48.966 bw ( KiB/s): min=16680, max=16680, per=97.75%, avg=16680.00, stdev= 0.00, samples=1 01:07:48.966 iops : min= 4170, max= 4170, avg=4170.00, stdev= 0.00, samples=1 01:07:48.966 lat (usec) : 100=49.08%, 250=50.90%, 750=0.02% 01:07:48.966 cpu : usr=2.10%, sys=7.20%, ctx=8364, majf=0, minf=2 01:07:48.966 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:07:48.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:48.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:48.966 issued rwts: total=4096,4266,0,0 short=0,0,0,0 dropped=0,0,0,0 01:07:48.966 latency : target=0, window=0, percentile=100.00%, depth=1 01:07:48.966 01:07:48.966 Run status group 0 (all jobs): 01:07:48.966 READ: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1000-1000msec 01:07:48.966 WRITE: bw=16.7MiB/s (17.5MB/s), 16.7MiB/s-16.7MiB/s (17.5MB/s-17.5MB/s), io=16.7MiB (17.5MB), run=1000-1000msec 01:07:48.966 01:07:48.966 Disk stats (read/write): 01:07:48.966 nvme0n1: ios=3634/3967, merge=0/0, ticks=508/337, in_queue=845, util=91.28% 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:07:48.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # 
nvmftestfini 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:07:48.966 rmmod nvme_tcp 01:07:48.966 rmmod nvme_fabrics 01:07:48.966 rmmod nvme_keyring 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 80423 ']' 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 80423 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 80423 ']' 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 80423 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80423 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80423' 01:07:48.966 killing process with pid 80423 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 80423 01:07:48.966 11:04:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 80423 01:07:49.225 11:04:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:07:49.225 11:04:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:07:49.225 11:04:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:07:49.225 11:04:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:07:49.225 11:04:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 01:07:49.225 11:04:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:49.225 11:04:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:49.225 11:04:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:49.225 11:04:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:07:49.225 01:07:49.225 real 0m5.666s 01:07:49.225 user 0m17.786s 01:07:49.225 sys 0m2.403s 01:07:49.225 11:04:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 01:07:49.225 11:04:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:07:49.225 ************************************ 01:07:49.225 END TEST nvmf_nmic 01:07:49.225 ************************************ 01:07:49.225 11:04:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:07:49.225 11:04:54 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 
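At this point nmic.sh has shown both of its cases: adding Malloc0 to a second subsystem fails because the bdev is already claimed exclusive_write by cnode1 (the Invalid parameters JSON-RPC error above is the expected result), while a single host may reach the same subsystem through two listeners. The initiator-side flow it used, and which fio.sh repeats below against more namespaces, condenses to the following sketch (values copied from the trace, not the test script itself):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb
HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb

# two paths to the same subsystem (ports 4420 and 4421), two controllers over one namespace
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

# wait for the namespace to surface, keyed on the subsystem serial number
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done

# 4 KiB sequential writes with verify through the SPDK fio wrapper
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v

# drop both controllers for the subsystem before teardown
nvme disconnect -n nqn.2016-06.io.spdk:cnode1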
01:07:49.225 11:04:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:07:49.225 11:04:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:07:49.225 11:04:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:07:49.225 ************************************ 01:07:49.225 START TEST nvmf_fio_target 01:07:49.225 ************************************ 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 01:07:49.225 * Looking for test storage... 01:07:49.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:07:49.225 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:07:49.485 Cannot find device "nvmf_tgt_br" 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:07:49.485 Cannot find device "nvmf_tgt_br2" 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 01:07:49.485 Cannot find device "nvmf_tgt_br" 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:07:49.485 Cannot find device "nvmf_tgt_br2" 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:07:49.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:07:49.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:07:49.485 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:07:49.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:07:49.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 01:07:49.745 01:07:49.745 --- 10.0.0.2 ping statistics --- 01:07:49.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:49.745 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:07:49.745 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:07:49.745 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 01:07:49.745 01:07:49.745 --- 10.0.0.3 ping statistics --- 01:07:49.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:49.745 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:07:49.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:07:49.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 01:07:49.745 01:07:49.745 --- 10.0.0.1 ping statistics --- 01:07:49.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:49.745 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=80688 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 80688 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 80688 ']' 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:49.745 11:04:54 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 01:07:49.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 01:07:49.745 11:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:07:50.004 [2024-07-22 11:04:54.987510] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:07:50.004 [2024-07-22 11:04:54.987594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:50.004 [2024-07-22 11:04:55.131645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:07:50.004 [2024-07-22 11:04:55.176919] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:50.004 [2024-07-22 11:04:55.176972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:50.004 [2024-07-22 11:04:55.176982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:50.004 [2024-07-22 11:04:55.176990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:50.004 [2024-07-22 11:04:55.176997] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:07:50.004 [2024-07-22 11:04:55.177445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:07:50.004 [2024-07-22 11:04:55.177623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:07:50.004 [2024-07-22 11:04:55.177836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:07:50.004 [2024-07-22 11:04:55.177838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:07:50.262 [2024-07-22 11:04:55.219354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:07:50.830 11:04:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:07:50.830 11:04:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 01:07:50.830 11:04:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:07:50.830 11:04:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 01:07:50.830 11:04:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:07:50.830 11:04:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:50.830 11:04:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:07:51.088 [2024-07-22 11:04:56.041043] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:51.088 11:04:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:07:51.346 11:04:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 01:07:51.346 11:04:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 01:07:51.604 11:04:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 01:07:51.604 11:04:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:07:51.863 11:04:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 01:07:51.863 11:04:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:07:51.863 11:04:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 01:07:51.863 11:04:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 01:07:52.122 11:04:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:07:52.380 11:04:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 01:07:52.380 11:04:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:07:52.638 11:04:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 01:07:52.638 11:04:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:07:52.897 11:04:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 01:07:52.897 11:04:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 01:07:52.897 11:04:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:07:53.155 11:04:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:07:53.155 11:04:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:07:53.414 11:04:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:07:53.414 11:04:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:07:53.673 11:04:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:07:53.673 [2024-07-22 11:04:58.852226] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:53.673 11:04:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 01:07:53.932 11:04:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 01:07:54.190 11:04:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:07:54.449 11:04:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 01:07:54.449 11:04:59 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1198 -- # local i=0 01:07:54.449 11:04:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:07:54.449 11:04:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 01:07:54.449 11:04:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 01:07:54.449 11:04:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 01:07:56.350 11:05:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:07:56.350 11:05:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:07:56.350 11:05:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:07:56.350 11:05:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 01:07:56.350 11:05:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:07:56.350 11:05:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 01:07:56.350 11:05:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:07:56.350 [global] 01:07:56.350 thread=1 01:07:56.350 invalidate=1 01:07:56.350 rw=write 01:07:56.350 time_based=1 01:07:56.350 runtime=1 01:07:56.350 ioengine=libaio 01:07:56.350 direct=1 01:07:56.350 bs=4096 01:07:56.350 iodepth=1 01:07:56.350 norandommap=0 01:07:56.350 numjobs=1 01:07:56.350 01:07:56.350 verify_dump=1 01:07:56.350 verify_backlog=512 01:07:56.350 verify_state_save=0 01:07:56.350 do_verify=1 01:07:56.350 verify=crc32c-intel 01:07:56.350 [job0] 01:07:56.350 filename=/dev/nvme0n1 01:07:56.350 [job1] 01:07:56.350 filename=/dev/nvme0n2 01:07:56.350 [job2] 01:07:56.350 filename=/dev/nvme0n3 01:07:56.350 [job3] 01:07:56.350 filename=/dev/nvme0n4 01:07:56.608 Could not set queue depth (nvme0n1) 01:07:56.608 Could not set queue depth (nvme0n2) 01:07:56.608 Could not set queue depth (nvme0n3) 01:07:56.608 Could not set queue depth (nvme0n4) 01:07:56.608 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:07:56.608 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:07:56.608 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:07:56.608 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:07:56.608 fio-3.35 01:07:56.608 Starting 4 threads 01:07:57.984 01:07:57.984 job0: (groupid=0, jobs=1): err= 0: pid=80867: Mon Jul 22 11:05:02 2024 01:07:57.984 read: IOPS=3450, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1001msec) 01:07:57.984 slat (nsec): min=7137, max=70509, avg=9223.98, stdev=3354.05 01:07:57.984 clat (usec): min=111, max=621, avg=150.35, stdev=41.58 01:07:57.984 lat (usec): min=118, max=640, avg=159.58, stdev=43.31 01:07:57.984 clat percentiles (usec): 01:07:57.984 | 1.00th=[ 119], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 129], 01:07:57.984 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 01:07:57.984 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 217], 95.00th=[ 233], 01:07:57.984 | 99.00th=[ 355], 99.50th=[ 396], 99.90th=[ 441], 99.95th=[ 465], 01:07:57.984 | 99.99th=[ 619] 01:07:57.984 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 01:07:57.984 slat (nsec): min=8468, 
max=63091, avg=14840.48, stdev=6937.65 01:07:57.984 clat (usec): min=65, max=271, avg=108.40, stdev=27.01 01:07:57.984 lat (usec): min=77, max=320, avg=123.24, stdev=31.48 01:07:57.984 clat percentiles (usec): 01:07:57.984 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 90], 01:07:57.984 | 30.00th=[ 93], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 103], 01:07:57.984 | 70.00th=[ 109], 80.00th=[ 119], 90.00th=[ 159], 95.00th=[ 172], 01:07:57.984 | 99.00th=[ 192], 99.50th=[ 200], 99.90th=[ 221], 99.95th=[ 258], 01:07:57.984 | 99.99th=[ 273] 01:07:57.984 bw ( KiB/s): min=12968, max=12968, per=27.91%, avg=12968.00, stdev= 0.00, samples=1 01:07:57.984 iops : min= 3242, max= 3242, avg=3242.00, stdev= 0.00, samples=1 01:07:57.984 lat (usec) : 100=26.97%, 250=72.01%, 500=1.01%, 750=0.01% 01:07:57.984 cpu : usr=1.30%, sys=7.20%, ctx=7038, majf=0, minf=4 01:07:57.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:07:57.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:57.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:57.984 issued rwts: total=3454,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 01:07:57.984 latency : target=0, window=0, percentile=100.00%, depth=1 01:07:57.984 job1: (groupid=0, jobs=1): err= 0: pid=80868: Mon Jul 22 11:05:02 2024 01:07:57.984 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 01:07:57.984 slat (nsec): min=7387, max=49806, avg=9873.70, stdev=5472.06 01:07:57.984 clat (usec): min=125, max=1421, avg=250.50, stdev=51.44 01:07:57.984 lat (usec): min=133, max=1429, avg=260.37, stdev=53.32 01:07:57.984 clat percentiles (usec): 01:07:57.984 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 01:07:57.984 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 01:07:57.984 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 306], 95.00th=[ 326], 01:07:57.984 | 99.00th=[ 433], 99.50th=[ 482], 99.90th=[ 709], 99.95th=[ 898], 01:07:57.984 | 99.99th=[ 1418] 01:07:57.984 write: IOPS=2283, BW=9135KiB/s (9354kB/s)(9144KiB/1001msec); 0 zone resets 01:07:57.984 slat (usec): min=11, max=100, avg=18.35, stdev=12.74 01:07:57.984 clat (usec): min=74, max=1495, avg=183.61, stdev=56.39 01:07:57.984 lat (usec): min=93, max=1562, avg=201.96, stdev=65.77 01:07:57.984 clat percentiles (usec): 01:07:57.984 | 1.00th=[ 100], 5.00th=[ 108], 10.00th=[ 127], 20.00th=[ 159], 01:07:57.984 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 01:07:57.984 | 70.00th=[ 184], 80.00th=[ 198], 90.00th=[ 258], 95.00th=[ 297], 01:07:57.984 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 375], 99.95th=[ 420], 01:07:57.984 | 99.99th=[ 1500] 01:07:57.984 bw ( KiB/s): min= 8192, max= 8192, per=17.63%, avg=8192.00, stdev= 0.00, samples=1 01:07:57.984 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:07:57.984 lat (usec) : 100=0.65%, 250=80.39%, 500=18.76%, 750=0.14%, 1000=0.02% 01:07:57.984 lat (msec) : 2=0.05% 01:07:57.984 cpu : usr=0.90%, sys=5.40%, ctx=4336, majf=0, minf=9 01:07:57.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:07:57.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:57.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:57.984 issued rwts: total=2048,2286,0,0 short=0,0,0,0 dropped=0,0,0,0 01:07:57.984 latency : target=0, window=0, percentile=100.00%, depth=1 01:07:57.984 job2: (groupid=0, jobs=1): err= 0: pid=80869: Mon Jul 22 11:05:02 2024 01:07:57.984 read: 
IOPS=3218, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec) 01:07:57.984 slat (nsec): min=7397, max=45520, avg=8280.16, stdev=1765.02 01:07:57.984 clat (usec): min=117, max=6195, avg=160.02, stdev=161.98 01:07:57.984 lat (usec): min=126, max=6203, avg=168.30, stdev=162.26 01:07:57.984 clat percentiles (usec): 01:07:57.984 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 01:07:57.984 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 01:07:57.984 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 192], 95.00th=[ 229], 01:07:57.984 | 99.00th=[ 260], 99.50th=[ 277], 99.90th=[ 3326], 99.95th=[ 4080], 01:07:57.984 | 99.99th=[ 6194] 01:07:57.984 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 01:07:57.984 slat (nsec): min=11316, max=97181, avg=13151.93, stdev=5074.84 01:07:57.984 clat (usec): min=81, max=1788, avg=112.79, stdev=39.93 01:07:57.984 lat (usec): min=92, max=1800, avg=125.94, stdev=41.55 01:07:57.984 clat percentiles (usec): 01:07:57.984 | 1.00th=[ 87], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 96], 01:07:57.984 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 103], 60.00th=[ 106], 01:07:57.984 | 70.00th=[ 111], 80.00th=[ 117], 90.00th=[ 159], 95.00th=[ 182], 01:07:57.984 | 99.00th=[ 217], 99.50th=[ 231], 99.90th=[ 253], 99.95th=[ 506], 01:07:57.984 | 99.99th=[ 1795] 01:07:57.984 bw ( KiB/s): min=12288, max=12288, per=26.45%, avg=12288.00, stdev= 0.00, samples=1 01:07:57.984 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 01:07:57.984 lat (usec) : 100=20.56%, 250=78.58%, 500=0.69%, 750=0.04%, 1000=0.01% 01:07:57.984 lat (msec) : 2=0.04%, 4=0.04%, 10=0.03% 01:07:57.984 cpu : usr=1.70%, sys=5.70%, ctx=6807, majf=0, minf=11 01:07:57.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:07:57.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:57.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:57.984 issued rwts: total=3222,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 01:07:57.984 latency : target=0, window=0, percentile=100.00%, depth=1 01:07:57.984 job3: (groupid=0, jobs=1): err= 0: pid=80870: Mon Jul 22 11:05:02 2024 01:07:57.984 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 01:07:57.984 slat (usec): min=7, max=387, avg=10.20, stdev=10.32 01:07:57.984 clat (usec): min=126, max=948, avg=250.06, stdev=45.84 01:07:57.984 lat (usec): min=135, max=956, avg=260.26, stdev=50.12 01:07:57.984 clat percentiles (usec): 01:07:57.984 | 1.00th=[ 200], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 01:07:57.984 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 01:07:57.984 | 70.00th=[ 249], 80.00th=[ 262], 90.00th=[ 297], 95.00th=[ 318], 01:07:57.984 | 99.00th=[ 437], 99.50th=[ 529], 99.90th=[ 611], 99.95th=[ 660], 01:07:57.984 | 99.99th=[ 947] 01:07:57.984 write: IOPS=2171, BW=8687KiB/s (8896kB/s)(8696KiB/1001msec); 0 zone resets 01:07:57.984 slat (usec): min=11, max=119, avg=19.89, stdev=14.37 01:07:57.984 clat (usec): min=99, max=587, avg=192.60, stdev=60.26 01:07:57.984 lat (usec): min=114, max=632, avg=212.49, stdev=71.78 01:07:57.984 clat percentiles (usec): 01:07:57.984 | 1.00th=[ 111], 5.00th=[ 122], 10.00th=[ 151], 20.00th=[ 161], 01:07:57.984 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 01:07:57.984 | 70.00th=[ 188], 80.00th=[ 206], 90.00th=[ 289], 95.00th=[ 322], 01:07:57.984 | 99.00th=[ 412], 99.50th=[ 433], 99.90th=[ 515], 99.95th=[ 519], 01:07:57.984 | 99.99th=[ 586] 01:07:57.984 bw ( KiB/s): min= 
8192, max= 8192, per=17.63%, avg=8192.00, stdev= 0.00, samples=1 01:07:57.984 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:07:57.984 lat (usec) : 100=0.02%, 250=78.59%, 500=21.01%, 750=0.36%, 1000=0.02% 01:07:57.984 cpu : usr=1.30%, sys=5.10%, ctx=4229, majf=0, minf=13 01:07:57.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:07:57.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:57.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:57.984 issued rwts: total=2048,2174,0,0 short=0,0,0,0 dropped=0,0,0,0 01:07:57.984 latency : target=0, window=0, percentile=100.00%, depth=1 01:07:57.984 01:07:57.984 Run status group 0 (all jobs): 01:07:57.984 READ: bw=42.0MiB/s (44.1MB/s), 8184KiB/s-13.5MiB/s (8380kB/s-14.1MB/s), io=42.1MiB (44.1MB), run=1001-1001msec 01:07:57.984 WRITE: bw=45.4MiB/s (47.6MB/s), 8687KiB/s-14.0MiB/s (8896kB/s-14.7MB/s), io=45.4MiB (47.6MB), run=1001-1001msec 01:07:57.984 01:07:57.984 Disk stats (read/write): 01:07:57.984 nvme0n1: ios=2986/3072, merge=0/0, ticks=483/361, in_queue=844, util=88.77% 01:07:57.984 nvme0n2: ios=1739/2048, merge=0/0, ticks=466/395, in_queue=861, util=89.89% 01:07:57.984 nvme0n3: ios=2773/3072, merge=0/0, ticks=462/365, in_queue=827, util=89.32% 01:07:57.984 nvme0n4: ios=1652/2048, merge=0/0, ticks=451/406, in_queue=857, util=90.30% 01:07:57.984 11:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 01:07:57.984 [global] 01:07:57.984 thread=1 01:07:57.984 invalidate=1 01:07:57.984 rw=randwrite 01:07:57.984 time_based=1 01:07:57.984 runtime=1 01:07:57.984 ioengine=libaio 01:07:57.984 direct=1 01:07:57.984 bs=4096 01:07:57.984 iodepth=1 01:07:57.984 norandommap=0 01:07:57.984 numjobs=1 01:07:57.984 01:07:57.984 verify_dump=1 01:07:57.984 verify_backlog=512 01:07:57.984 verify_state_save=0 01:07:57.984 do_verify=1 01:07:57.984 verify=crc32c-intel 01:07:57.984 [job0] 01:07:57.984 filename=/dev/nvme0n1 01:07:57.984 [job1] 01:07:57.984 filename=/dev/nvme0n2 01:07:57.984 [job2] 01:07:57.984 filename=/dev/nvme0n3 01:07:57.984 [job3] 01:07:57.984 filename=/dev/nvme0n4 01:07:57.984 Could not set queue depth (nvme0n1) 01:07:57.984 Could not set queue depth (nvme0n2) 01:07:57.984 Could not set queue depth (nvme0n3) 01:07:57.984 Could not set queue depth (nvme0n4) 01:07:57.984 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:07:57.984 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:07:57.984 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:07:57.984 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:07:57.984 fio-3.35 01:07:57.984 Starting 4 threads 01:07:59.357 01:07:59.357 job0: (groupid=0, jobs=1): err= 0: pid=80927: Mon Jul 22 11:05:04 2024 01:07:59.357 read: IOPS=3775, BW=14.7MiB/s (15.5MB/s)(14.8MiB/1001msec) 01:07:59.357 slat (nsec): min=6986, max=25846, avg=8028.07, stdev=1616.51 01:07:59.357 clat (usec): min=114, max=1652, avg=136.57, stdev=29.02 01:07:59.357 lat (usec): min=121, max=1659, avg=144.60, stdev=29.19 01:07:59.357 clat percentiles (usec): 01:07:59.357 | 1.00th=[ 120], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 128], 01:07:59.357 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 
137], 01:07:59.357 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 155], 01:07:59.357 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 441], 99.95th=[ 519], 01:07:59.357 | 99.99th=[ 1647] 01:07:59.357 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 01:07:59.357 slat (nsec): min=10215, max=97160, avg=13403.89, stdev=3242.57 01:07:59.357 clat (usec): min=71, max=164, avg=95.46, stdev= 9.80 01:07:59.357 lat (usec): min=83, max=261, avg=108.86, stdev=11.39 01:07:59.357 clat percentiles (usec): 01:07:59.357 | 1.00th=[ 78], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 88], 01:07:59.357 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 96], 01:07:59.357 | 70.00th=[ 99], 80.00th=[ 103], 90.00th=[ 110], 95.00th=[ 115], 01:07:59.357 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 145], 99.95th=[ 151], 01:07:59.357 | 99.99th=[ 165] 01:07:59.357 bw ( KiB/s): min=16384, max=16384, per=31.75%, avg=16384.00, stdev= 0.00, samples=1 01:07:59.357 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 01:07:59.357 lat (usec) : 100=37.85%, 250=62.04%, 500=0.08%, 750=0.01% 01:07:59.357 lat (msec) : 2=0.01% 01:07:59.357 cpu : usr=1.70%, sys=7.10%, ctx=7876, majf=0, minf=5 01:07:59.357 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:07:59.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:59.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:59.357 issued rwts: total=3779,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 01:07:59.357 latency : target=0, window=0, percentile=100.00%, depth=1 01:07:59.357 job1: (groupid=0, jobs=1): err= 0: pid=80928: Mon Jul 22 11:05:04 2024 01:07:59.357 read: IOPS=2400, BW=9602KiB/s (9833kB/s)(9612KiB/1001msec) 01:07:59.357 slat (nsec): min=7169, max=35076, avg=7823.60, stdev=1246.12 01:07:59.357 clat (usec): min=179, max=314, avg=214.94, stdev=15.23 01:07:59.357 lat (usec): min=187, max=322, avg=222.76, stdev=15.29 01:07:59.357 clat percentiles (usec): 01:07:59.357 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 01:07:59.357 | 30.00th=[ 208], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 217], 01:07:59.357 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 233], 95.00th=[ 239], 01:07:59.357 | 99.00th=[ 258], 99.50th=[ 293], 99.90th=[ 310], 99.95th=[ 310], 01:07:59.357 | 99.99th=[ 314] 01:07:59.357 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 01:07:59.357 slat (usec): min=7, max=111, avg=12.70, stdev= 4.88 01:07:59.357 clat (usec): min=94, max=394, avg=167.04, stdev=15.98 01:07:59.357 lat (usec): min=135, max=449, avg=179.75, stdev=17.76 01:07:59.357 clat percentiles (usec): 01:07:59.357 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 01:07:59.357 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 01:07:59.357 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 188], 01:07:59.357 | 99.00th=[ 208], 99.50th=[ 229], 99.90th=[ 338], 99.95th=[ 375], 01:07:59.357 | 99.99th=[ 396] 01:07:59.357 bw ( KiB/s): min=11712, max=11712, per=22.69%, avg=11712.00, stdev= 0.00, samples=1 01:07:59.358 iops : min= 2928, max= 2928, avg=2928.00, stdev= 0.00, samples=1 01:07:59.358 lat (usec) : 100=0.06%, 250=98.85%, 500=1.09% 01:07:59.358 cpu : usr=1.60%, sys=4.30%, ctx=4964, majf=0, minf=11 01:07:59.358 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:07:59.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:59.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:59.358 issued rwts: total=2403,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 01:07:59.358 latency : target=0, window=0, percentile=100.00%, depth=1 01:07:59.358 job2: (groupid=0, jobs=1): err= 0: pid=80929: Mon Jul 22 11:05:04 2024 01:07:59.358 read: IOPS=2402, BW=9610KiB/s (9841kB/s)(9620KiB/1001msec) 01:07:59.358 slat (nsec): min=5774, max=19537, avg=6640.19, stdev=859.73 01:07:59.358 clat (usec): min=120, max=315, avg=216.21, stdev=15.21 01:07:59.358 lat (usec): min=133, max=322, avg=222.85, stdev=15.36 01:07:59.358 clat percentiles (usec): 01:07:59.358 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 01:07:59.358 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 219], 01:07:59.358 | 70.00th=[ 223], 80.00th=[ 227], 90.00th=[ 235], 95.00th=[ 241], 01:07:59.358 | 99.00th=[ 258], 99.50th=[ 293], 99.90th=[ 310], 99.95th=[ 314], 01:07:59.358 | 99.99th=[ 318] 01:07:59.358 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 01:07:59.358 slat (nsec): min=7280, max=78534, avg=11773.85, stdev=5517.55 01:07:59.358 clat (usec): min=96, max=398, avg=167.99, stdev=15.60 01:07:59.358 lat (usec): min=137, max=408, avg=179.77, stdev=16.98 01:07:59.358 clat percentiles (usec): 01:07:59.358 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 01:07:59.358 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 01:07:59.358 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 188], 01:07:59.358 | 99.00th=[ 206], 99.50th=[ 223], 99.90th=[ 334], 99.95th=[ 379], 01:07:59.358 | 99.99th=[ 400] 01:07:59.358 bw ( KiB/s): min=11735, max=11735, per=22.74%, avg=11735.00, stdev= 0.00, samples=1 01:07:59.358 iops : min= 2933, max= 2933, avg=2933.00, stdev= 0.00, samples=1 01:07:59.358 lat (usec) : 100=0.02%, 250=98.99%, 500=0.99% 01:07:59.358 cpu : usr=1.00%, sys=3.90%, ctx=4966, majf=0, minf=14 01:07:59.358 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:07:59.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:59.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:59.358 issued rwts: total=2405,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 01:07:59.358 latency : target=0, window=0, percentile=100.00%, depth=1 01:07:59.358 job3: (groupid=0, jobs=1): err= 0: pid=80931: Mon Jul 22 11:05:04 2024 01:07:59.358 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 01:07:59.358 slat (nsec): min=6930, max=47883, avg=7702.90, stdev=1291.15 01:07:59.358 clat (usec): min=116, max=2037, avg=144.72, stdev=34.28 01:07:59.358 lat (usec): min=128, max=2045, avg=152.42, stdev=34.31 01:07:59.358 clat percentiles (usec): 01:07:59.358 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 137], 01:07:59.358 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 01:07:59.358 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 163], 01:07:59.358 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 371], 99.95th=[ 537], 01:07:59.358 | 99.99th=[ 2040] 01:07:59.358 write: IOPS=3695, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1001msec); 0 zone resets 01:07:59.358 slat (usec): min=8, max=101, avg=13.11, stdev= 5.12 01:07:59.358 clat (usec): min=78, max=206, avg=107.72, stdev=11.46 01:07:59.358 lat (usec): min=90, max=308, avg=120.83, stdev=13.58 01:07:59.358 clat percentiles (usec): 01:07:59.358 | 1.00th=[ 89], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 98], 01:07:59.358 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 106], 60.00th=[ 110], 01:07:59.358 
| 70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 124], 95.00th=[ 130], 01:07:59.358 | 99.00th=[ 141], 99.50th=[ 147], 99.90th=[ 161], 99.95th=[ 174], 01:07:59.358 | 99.99th=[ 208] 01:07:59.358 bw ( KiB/s): min=16384, max=16384, per=31.75%, avg=16384.00, stdev= 0.00, samples=1 01:07:59.358 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 01:07:59.358 lat (usec) : 100=13.92%, 250=86.01%, 500=0.04%, 750=0.01% 01:07:59.358 lat (msec) : 4=0.01% 01:07:59.358 cpu : usr=1.90%, sys=6.00%, ctx=7284, majf=0, minf=15 01:07:59.358 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:07:59.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:59.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:07:59.358 issued rwts: total=3584,3699,0,0 short=0,0,0,0 dropped=0,0,0,0 01:07:59.358 latency : target=0, window=0, percentile=100.00%, depth=1 01:07:59.358 01:07:59.358 Run status group 0 (all jobs): 01:07:59.358 READ: bw=47.5MiB/s (49.8MB/s), 9602KiB/s-14.7MiB/s (9833kB/s-15.5MB/s), io=47.5MiB (49.9MB), run=1001-1001msec 01:07:59.358 WRITE: bw=50.4MiB/s (52.8MB/s), 9.99MiB/s-16.0MiB/s (10.5MB/s-16.8MB/s), io=50.4MiB (52.9MB), run=1001-1001msec 01:07:59.358 01:07:59.358 Disk stats (read/write): 01:07:59.358 nvme0n1: ios=3285/3584, merge=0/0, ticks=448/359, in_queue=807, util=88.26% 01:07:59.358 nvme0n2: ios=2097/2266, merge=0/0, ticks=461/373, in_queue=834, util=89.19% 01:07:59.358 nvme0n3: ios=2054/2266, merge=0/0, ticks=418/369, in_queue=787, util=88.93% 01:07:59.358 nvme0n4: ios=3072/3239, merge=0/0, ticks=441/371, in_queue=812, util=89.87% 01:07:59.358 11:05:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 01:07:59.358 [global] 01:07:59.358 thread=1 01:07:59.358 invalidate=1 01:07:59.358 rw=write 01:07:59.358 time_based=1 01:07:59.358 runtime=1 01:07:59.358 ioengine=libaio 01:07:59.358 direct=1 01:07:59.358 bs=4096 01:07:59.358 iodepth=128 01:07:59.358 norandommap=0 01:07:59.358 numjobs=1 01:07:59.358 01:07:59.358 verify_dump=1 01:07:59.358 verify_backlog=512 01:07:59.358 verify_state_save=0 01:07:59.358 do_verify=1 01:07:59.358 verify=crc32c-intel 01:07:59.358 [job0] 01:07:59.358 filename=/dev/nvme0n1 01:07:59.358 [job1] 01:07:59.358 filename=/dev/nvme0n2 01:07:59.358 [job2] 01:07:59.358 filename=/dev/nvme0n3 01:07:59.358 [job3] 01:07:59.358 filename=/dev/nvme0n4 01:07:59.358 Could not set queue depth (nvme0n1) 01:07:59.358 Could not set queue depth (nvme0n2) 01:07:59.358 Could not set queue depth (nvme0n3) 01:07:59.358 Could not set queue depth (nvme0n4) 01:07:59.358 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:07:59.358 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:07:59.358 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:07:59.358 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:07:59.358 fio-3.35 01:07:59.358 Starting 4 threads 01:08:00.733 01:08:00.733 job0: (groupid=0, jobs=1): err= 0: pid=80990: Mon Jul 22 11:05:05 2024 01:08:00.733 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(9.95MiB/1005msec) 01:08:00.733 slat (usec): min=15, max=9659, avg=209.56, stdev=801.54 01:08:00.733 clat (usec): min=2688, max=51742, avg=26997.43, stdev=8195.15 01:08:00.733 lat (usec): min=6345, 
max=51762, avg=27206.99, stdev=8234.72 01:08:00.733 clat percentiles (usec): 01:08:00.733 | 1.00th=[11731], 5.00th=[18744], 10.00th=[20055], 20.00th=[21365], 01:08:00.733 | 30.00th=[21890], 40.00th=[22152], 50.00th=[22938], 60.00th=[25560], 01:08:00.733 | 70.00th=[30540], 80.00th=[33817], 90.00th=[39060], 95.00th=[44827], 01:08:00.733 | 99.00th=[50070], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 01:08:00.733 | 99.99th=[51643] 01:08:00.733 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 01:08:00.733 slat (usec): min=9, max=6792, avg=169.94, stdev=652.91 01:08:00.733 clat (usec): min=13652, max=38528, avg=22641.48, stdev=4230.77 01:08:00.733 lat (usec): min=13692, max=38561, avg=22811.42, stdev=4236.91 01:08:00.733 clat percentiles (usec): 01:08:00.733 | 1.00th=[15139], 5.00th=[16188], 10.00th=[18482], 20.00th=[20055], 01:08:00.733 | 30.00th=[20317], 40.00th=[20841], 50.00th=[21365], 60.00th=[22152], 01:08:00.733 | 70.00th=[23200], 80.00th=[25822], 90.00th=[29492], 95.00th=[31851], 01:08:00.733 | 99.00th=[33817], 99.50th=[35390], 99.90th=[37487], 99.95th=[38011], 01:08:00.733 | 99.99th=[38536] 01:08:00.733 bw ( KiB/s): min= 8192, max=12288, per=14.72%, avg=10240.00, stdev=2896.31, samples=2 01:08:00.733 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 01:08:00.733 lat (msec) : 4=0.02%, 10=0.33%, 20=15.18%, 50=83.79%, 100=0.69% 01:08:00.733 cpu : usr=3.98%, sys=10.06%, ctx=635, majf=0, minf=19 01:08:00.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 01:08:00.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:00.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:08:00.733 issued rwts: total=2547,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:00.733 latency : target=0, window=0, percentile=100.00%, depth=128 01:08:00.733 job1: (groupid=0, jobs=1): err= 0: pid=80991: Mon Jul 22 11:05:05 2024 01:08:00.733 read: IOPS=6843, BW=26.7MiB/s (28.0MB/s)(26.8MiB/1002msec) 01:08:00.733 slat (usec): min=15, max=3408, avg=68.03, stdev=257.59 01:08:00.733 clat (usec): min=420, max=13022, avg=9377.53, stdev=1168.93 01:08:00.733 lat (usec): min=923, max=15062, avg=9445.56, stdev=1179.39 01:08:00.733 clat percentiles (usec): 01:08:00.733 | 1.00th=[ 5276], 5.00th=[ 8029], 10.00th=[ 8094], 20.00th=[ 8356], 01:08:00.733 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10028], 01:08:00.733 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10552], 95.00th=[10814], 01:08:00.733 | 99.00th=[11731], 99.50th=[11994], 99.90th=[12649], 99.95th=[12911], 01:08:00.733 | 99.99th=[13042] 01:08:00.733 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 01:08:00.733 slat (usec): min=19, max=2966, avg=63.76, stdev=242.39 01:08:00.733 clat (usec): min=5868, max=12874, avg=8704.55, stdev=1085.85 01:08:00.733 lat (usec): min=5901, max=12923, avg=8768.30, stdev=1115.47 01:08:00.733 clat percentiles (usec): 01:08:00.733 | 1.00th=[ 6718], 5.00th=[ 7177], 10.00th=[ 7373], 20.00th=[ 7570], 01:08:00.733 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8979], 60.00th=[ 9241], 01:08:00.733 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10290], 01:08:00.733 | 99.00th=[11207], 99.50th=[11731], 99.90th=[12518], 99.95th=[12649], 01:08:00.733 | 99.99th=[12911] 01:08:00.733 bw ( KiB/s): min=26268, max=31128, per=41.26%, avg=28698.00, stdev=3436.54, samples=2 01:08:00.733 iops : min= 6567, max= 7782, avg=7174.50, stdev=859.13, samples=2 01:08:00.733 lat (usec) : 
500=0.01%, 1000=0.02% 01:08:00.733 lat (msec) : 2=0.08%, 4=0.15%, 10=77.64%, 20=22.10% 01:08:00.733 cpu : usr=7.79%, sys=26.97%, ctx=419, majf=0, minf=11 01:08:00.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 01:08:00.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:00.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:08:00.733 issued rwts: total=6857,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:00.733 latency : target=0, window=0, percentile=100.00%, depth=128 01:08:00.733 job2: (groupid=0, jobs=1): err= 0: pid=80992: Mon Jul 22 11:05:05 2024 01:08:00.733 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 01:08:00.733 slat (usec): min=7, max=12153, avg=192.32, stdev=773.73 01:08:00.733 clat (usec): min=15891, max=49263, avg=25418.49, stdev=6865.32 01:08:00.733 lat (usec): min=16186, max=49284, avg=25610.81, stdev=6916.55 01:08:00.733 clat percentiles (usec): 01:08:00.733 | 1.00th=[16450], 5.00th=[18482], 10.00th=[19792], 20.00th=[20841], 01:08:00.733 | 30.00th=[21627], 40.00th=[21890], 50.00th=[22414], 60.00th=[23200], 01:08:00.733 | 70.00th=[25822], 80.00th=[31851], 90.00th=[36439], 95.00th=[39584], 01:08:00.733 | 99.00th=[46400], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 01:08:00.733 | 99.99th=[49021] 01:08:00.733 write: IOPS=2906, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1002msec); 0 zone resets 01:08:00.733 slat (usec): min=21, max=6053, avg=163.25, stdev=622.89 01:08:00.733 clat (usec): min=1490, max=39088, avg=21020.60, stdev=5277.52 01:08:00.733 lat (usec): min=1537, max=39125, avg=21183.85, stdev=5292.47 01:08:00.733 clat percentiles (usec): 01:08:00.733 | 1.00th=[ 6063], 5.00th=[14615], 10.00th=[15664], 20.00th=[17171], 01:08:00.733 | 30.00th=[18482], 40.00th=[20055], 50.00th=[20317], 60.00th=[21103], 01:08:00.733 | 70.00th=[21890], 80.00th=[23725], 90.00th=[28181], 95.00th=[32637], 01:08:00.733 | 99.00th=[37487], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 01:08:00.733 | 99.99th=[39060] 01:08:00.733 bw ( KiB/s): min=12288, max=12288, per=17.67%, avg=12288.00, stdev= 0.00, samples=1 01:08:00.733 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 01:08:00.733 lat (msec) : 2=0.04%, 4=0.20%, 10=0.38%, 20=26.88%, 50=72.50% 01:08:00.733 cpu : usr=3.60%, sys=12.09%, ctx=600, majf=0, minf=13 01:08:00.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 01:08:00.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:00.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:08:00.733 issued rwts: total=2560,2912,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:00.733 latency : target=0, window=0, percentile=100.00%, depth=128 01:08:00.733 job3: (groupid=0, jobs=1): err= 0: pid=80993: Mon Jul 22 11:05:05 2024 01:08:00.733 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 01:08:00.733 slat (usec): min=5, max=10186, avg=107.38, stdev=507.98 01:08:00.733 clat (usec): min=9593, max=38889, avg=14424.49, stdev=5589.42 01:08:00.733 lat (usec): min=9620, max=38909, avg=14531.87, stdev=5622.30 01:08:00.733 clat percentiles (usec): 01:08:00.733 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 01:08:00.733 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 01:08:00.733 | 70.00th=[13042], 80.00th=[18744], 90.00th=[21890], 95.00th=[25297], 01:08:00.733 | 99.00th=[38011], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 01:08:00.734 | 99.99th=[39060] 01:08:00.734 
write: IOPS=4822, BW=18.8MiB/s (19.8MB/s)(18.9MiB/1003msec); 0 zone resets 01:08:00.734 slat (usec): min=18, max=9037, avg=93.19, stdev=386.70 01:08:00.734 clat (usec): min=1922, max=24331, avg=12457.25, stdev=2774.58 01:08:00.734 lat (usec): min=4270, max=24364, avg=12550.44, stdev=2793.24 01:08:00.734 clat percentiles (usec): 01:08:00.734 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[10552], 20.00th=[10683], 01:08:00.734 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 01:08:00.734 | 70.00th=[12518], 80.00th=[14615], 90.00th=[16581], 95.00th=[17957], 01:08:00.734 | 99.00th=[21627], 99.50th=[23725], 99.90th=[23725], 99.95th=[24249], 01:08:00.734 | 99.99th=[24249] 01:08:00.734 bw ( KiB/s): min=13936, max=23791, per=27.12%, avg=18863.50, stdev=6968.54, samples=2 01:08:00.734 iops : min= 3484, max= 5947, avg=4715.50, stdev=1741.60, samples=2 01:08:00.734 lat (msec) : 2=0.01%, 10=1.99%, 20=87.90%, 50=10.10% 01:08:00.734 cpu : usr=5.59%, sys=19.26%, ctx=363, majf=0, minf=9 01:08:00.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 01:08:00.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:00.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:08:00.734 issued rwts: total=4608,4837,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:00.734 latency : target=0, window=0, percentile=100.00%, depth=128 01:08:00.734 01:08:00.734 Run status group 0 (all jobs): 01:08:00.734 READ: bw=64.4MiB/s (67.5MB/s), 9.90MiB/s-26.7MiB/s (10.4MB/s-28.0MB/s), io=64.7MiB (67.9MB), run=1002-1005msec 01:08:00.734 WRITE: bw=67.9MiB/s (71.2MB/s), 9.95MiB/s-27.9MiB/s (10.4MB/s-29.3MB/s), io=68.3MiB (71.6MB), run=1002-1005msec 01:08:00.734 01:08:00.734 Disk stats (read/write): 01:08:00.734 nvme0n1: ios=2098/2546, merge=0/0, ticks=16073/15940, in_queue=32013, util=86.06% 01:08:00.734 nvme0n2: ios=5807/6144, merge=0/0, ticks=25244/19256, in_queue=44500, util=88.89% 01:08:00.734 nvme0n3: ios=2312/2560, merge=0/0, ticks=17428/14684, in_queue=32112, util=88.11% 01:08:00.734 nvme0n4: ios=4096/4452, merge=0/0, ticks=15640/12349, in_queue=27989, util=89.27% 01:08:00.734 11:05:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 01:08:00.734 [global] 01:08:00.734 thread=1 01:08:00.734 invalidate=1 01:08:00.734 rw=randwrite 01:08:00.734 time_based=1 01:08:00.734 runtime=1 01:08:00.734 ioengine=libaio 01:08:00.734 direct=1 01:08:00.734 bs=4096 01:08:00.734 iodepth=128 01:08:00.734 norandommap=0 01:08:00.734 numjobs=1 01:08:00.734 01:08:00.734 verify_dump=1 01:08:00.734 verify_backlog=512 01:08:00.734 verify_state_save=0 01:08:00.734 do_verify=1 01:08:00.734 verify=crc32c-intel 01:08:00.734 [job0] 01:08:00.734 filename=/dev/nvme0n1 01:08:00.734 [job1] 01:08:00.734 filename=/dev/nvme0n2 01:08:00.734 [job2] 01:08:00.734 filename=/dev/nvme0n3 01:08:00.734 [job3] 01:08:00.734 filename=/dev/nvme0n4 01:08:00.734 Could not set queue depth (nvme0n1) 01:08:00.734 Could not set queue depth (nvme0n2) 01:08:00.734 Could not set queue depth (nvme0n3) 01:08:00.734 Could not set queue depth (nvme0n4) 01:08:00.734 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:08:00.734 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:08:00.734 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
01:08:00.734 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:08:00.734 fio-3.35 01:08:00.734 Starting 4 threads 01:08:02.112 01:08:02.112 job0: (groupid=0, jobs=1): err= 0: pid=81047: Mon Jul 22 11:05:07 2024 01:08:02.112 read: IOPS=4519, BW=17.7MiB/s (18.5MB/s)(17.7MiB/1002msec) 01:08:02.112 slat (usec): min=5, max=4634, avg=107.85, stdev=404.71 01:08:02.112 clat (usec): min=1136, max=28532, avg=14256.44, stdev=6125.72 01:08:02.112 lat (usec): min=4072, max=29015, avg=14364.29, stdev=6161.65 01:08:02.112 clat percentiles (usec): 01:08:02.112 | 1.00th=[ 8225], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[ 9896], 01:08:02.112 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 01:08:02.112 | 70.00th=[18744], 80.00th=[22152], 90.00th=[23462], 95.00th=[25822], 01:08:02.112 | 99.00th=[27919], 99.50th=[28443], 99.90th=[28443], 99.95th=[28443], 01:08:02.112 | 99.99th=[28443] 01:08:02.112 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 01:08:02.112 slat (usec): min=10, max=5299, avg=99.61, stdev=330.65 01:08:02.112 clat (usec): min=7300, max=27698, avg=13479.16, stdev=5355.11 01:08:02.112 lat (usec): min=7344, max=27731, avg=13578.77, stdev=5387.82 01:08:02.112 clat percentiles (usec): 01:08:02.112 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 01:08:02.112 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 01:08:02.112 | 70.00th=[16909], 80.00th=[20317], 90.00th=[22414], 95.00th=[23200], 01:08:02.112 | 99.00th=[25822], 99.50th=[26084], 99.90th=[27657], 99.95th=[27657], 01:08:02.112 | 99.99th=[27657] 01:08:02.112 bw ( KiB/s): min=12288, max=24625, per=25.57%, avg=18456.50, stdev=8723.58, samples=2 01:08:02.112 iops : min= 3072, max= 6156, avg=4614.00, stdev=2180.72, samples=2 01:08:02.112 lat (msec) : 2=0.01%, 10=32.10%, 20=43.46%, 50=24.43% 01:08:02.112 cpu : usr=6.49%, sys=17.08%, ctx=754, majf=0, minf=7 01:08:02.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 01:08:02.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:02.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:08:02.112 issued rwts: total=4529,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:02.112 latency : target=0, window=0, percentile=100.00%, depth=128 01:08:02.112 job1: (groupid=0, jobs=1): err= 0: pid=81048: Mon Jul 22 11:05:07 2024 01:08:02.112 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 01:08:02.112 slat (usec): min=11, max=4801, avg=102.74, stdev=380.12 01:08:02.112 clat (usec): min=6621, max=26049, avg=13869.78, stdev=5281.66 01:08:02.112 lat (usec): min=6640, max=26070, avg=13972.51, stdev=5310.95 01:08:02.112 clat percentiles (usec): 01:08:02.112 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10290], 01:08:02.112 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 01:08:02.112 | 70.00th=[12256], 80.00th=[21627], 90.00th=[22414], 95.00th=[23200], 01:08:02.112 | 99.00th=[24511], 99.50th=[24773], 99.90th=[26084], 99.95th=[26084], 01:08:02.112 | 99.99th=[26084] 01:08:02.112 write: IOPS=4678, BW=18.3MiB/s (19.2MB/s)(18.3MiB/1001msec); 0 zone resets 01:08:02.112 slat (usec): min=19, max=5562, avg=100.30, stdev=432.77 01:08:02.112 clat (usec): min=430, max=23342, avg=13249.72, stdev=5086.89 01:08:02.112 lat (usec): min=500, max=24604, avg=13350.01, stdev=5110.45 01:08:02.112 clat percentiles (usec): 01:08:02.112 | 1.00th=[ 3490], 5.00th=[ 9503], 
10.00th=[ 9765], 20.00th=[ 9896], 01:08:02.112 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 01:08:02.112 | 70.00th=[19268], 80.00th=[19792], 90.00th=[20841], 95.00th=[21365], 01:08:02.112 | 99.00th=[23200], 99.50th=[23200], 99.90th=[23200], 99.95th=[23200], 01:08:02.112 | 99.99th=[23462] 01:08:02.112 bw ( KiB/s): min=12288, max=12288, per=17.03%, avg=12288.00, stdev= 0.00, samples=1 01:08:02.112 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 01:08:02.112 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.05% 01:08:02.112 lat (msec) : 2=0.24%, 4=0.28%, 10=21.52%, 20=55.77%, 50=22.06% 01:08:02.112 cpu : usr=5.90%, sys=19.30%, ctx=504, majf=0, minf=13 01:08:02.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 01:08:02.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:02.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:08:02.112 issued rwts: total=4608,4683,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:02.112 latency : target=0, window=0, percentile=100.00%, depth=128 01:08:02.112 job2: (groupid=0, jobs=1): err= 0: pid=81049: Mon Jul 22 11:05:07 2024 01:08:02.112 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 01:08:02.112 slat (usec): min=10, max=10135, avg=84.43, stdev=443.94 01:08:02.112 clat (usec): min=6366, max=22312, avg=11930.85, stdev=1777.08 01:08:02.112 lat (usec): min=6393, max=22337, avg=12015.29, stdev=1784.90 01:08:02.112 clat percentiles (usec): 01:08:02.112 | 1.00th=[ 8029], 5.00th=[ 9241], 10.00th=[10814], 20.00th=[11338], 01:08:02.112 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[11863], 01:08:02.112 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12911], 95.00th=[13829], 01:08:02.112 | 99.00th=[20055], 99.50th=[21103], 99.90th=[21627], 99.95th=[21627], 01:08:02.112 | 99.99th=[22414] 01:08:02.112 write: IOPS=5709, BW=22.3MiB/s (23.4MB/s)(22.3MiB/1001msec); 0 zone resets 01:08:02.112 slat (usec): min=13, max=5958, avg=80.22, stdev=350.58 01:08:02.112 clat (usec): min=855, max=21600, avg=10437.98, stdev=1315.27 01:08:02.112 lat (usec): min=888, max=21615, avg=10518.20, stdev=1284.33 01:08:02.112 clat percentiles (usec): 01:08:02.112 | 1.00th=[ 4752], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[ 9896], 01:08:02.112 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 01:08:02.112 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 01:08:02.112 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13566], 99.95th=[13566], 01:08:02.112 | 99.99th=[21627] 01:08:02.112 bw ( KiB/s): min=24576, max=24576, per=34.05%, avg=24576.00, stdev= 0.00, samples=1 01:08:02.112 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 01:08:02.112 lat (usec) : 1000=0.03% 01:08:02.112 lat (msec) : 2=0.01%, 4=0.05%, 10=13.41%, 20=86.01%, 50=0.49% 01:08:02.112 cpu : usr=6.80%, sys=22.60%, ctx=331, majf=0, minf=13 01:08:02.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 01:08:02.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:02.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:08:02.112 issued rwts: total=5632,5715,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:02.112 latency : target=0, window=0, percentile=100.00%, depth=128 01:08:02.112 job3: (groupid=0, jobs=1): err= 0: pid=81050: Mon Jul 22 11:05:07 2024 01:08:02.112 read: IOPS=2807, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1002msec) 01:08:02.112 slat (usec): min=5, max=6441, avg=168.90, 
stdev=595.12 01:08:02.112 clat (usec): min=927, max=29695, avg=21804.54, stdev=2937.73 01:08:02.112 lat (usec): min=3567, max=29728, avg=21973.44, stdev=2902.09 01:08:02.112 clat percentiles (usec): 01:08:02.112 | 1.00th=[ 5211], 5.00th=[18482], 10.00th=[19530], 20.00th=[20579], 01:08:02.112 | 30.00th=[21365], 40.00th=[22152], 50.00th=[22152], 60.00th=[22414], 01:08:02.112 | 70.00th=[22676], 80.00th=[23200], 90.00th=[24511], 95.00th=[25297], 01:08:02.112 | 99.00th=[26608], 99.50th=[26608], 99.90th=[29754], 99.95th=[29754], 01:08:02.112 | 99.99th=[29754] 01:08:02.112 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 01:08:02.112 slat (usec): min=6, max=5324, avg=159.17, stdev=587.92 01:08:02.112 clat (usec): min=14412, max=25547, avg=21120.37, stdev=1494.14 01:08:02.112 lat (usec): min=15930, max=25961, avg=21279.54, stdev=1415.41 01:08:02.112 clat percentiles (usec): 01:08:02.112 | 1.00th=[17171], 5.00th=[19006], 10.00th=[19530], 20.00th=[19792], 01:08:02.112 | 30.00th=[20317], 40.00th=[20841], 50.00th=[21103], 60.00th=[21365], 01:08:02.112 | 70.00th=[21890], 80.00th=[22414], 90.00th=[22676], 95.00th=[23725], 01:08:02.112 | 99.00th=[25035], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 01:08:02.112 | 99.99th=[25560] 01:08:02.112 bw ( KiB/s): min=12288, max=12312, per=17.04%, avg=12300.00, stdev=16.97, samples=2 01:08:02.112 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 01:08:02.112 lat (usec) : 1000=0.02% 01:08:02.112 lat (msec) : 4=0.32%, 10=0.54%, 20=18.57%, 50=80.54% 01:08:02.112 cpu : usr=4.00%, sys=11.89%, ctx=731, majf=0, minf=17 01:08:02.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 01:08:02.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:02.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:08:02.112 issued rwts: total=2813,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:02.112 latency : target=0, window=0, percentile=100.00%, depth=128 01:08:02.112 01:08:02.112 Run status group 0 (all jobs): 01:08:02.112 READ: bw=68.5MiB/s (71.9MB/s), 11.0MiB/s-22.0MiB/s (11.5MB/s-23.0MB/s), io=68.7MiB (72.0MB), run=1001-1002msec 01:08:02.112 WRITE: bw=70.5MiB/s (73.9MB/s), 12.0MiB/s-22.3MiB/s (12.6MB/s-23.4MB/s), io=70.6MiB (74.0MB), run=1001-1002msec 01:08:02.112 01:08:02.112 Disk stats (read/write): 01:08:02.112 nvme0n1: ios=4146/4308, merge=0/0, ticks=12071/10687, in_queue=22758, util=89.07% 01:08:02.112 nvme0n2: ios=3633/4068, merge=0/0, ticks=12489/10609, in_queue=23098, util=87.99% 01:08:02.112 nvme0n3: ios=4763/5120, merge=0/0, ticks=52063/47061, in_queue=99124, util=90.06% 01:08:02.112 nvme0n4: ios=2548/2560, merge=0/0, ticks=13280/9698, in_queue=22978, util=87.85% 01:08:02.112 11:05:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 01:08:02.112 11:05:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=81063 01:08:02.112 11:05:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 01:08:02.112 11:05:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 01:08:02.112 [global] 01:08:02.112 thread=1 01:08:02.112 invalidate=1 01:08:02.112 rw=read 01:08:02.112 time_based=1 01:08:02.112 runtime=10 01:08:02.112 ioengine=libaio 01:08:02.112 direct=1 01:08:02.112 bs=4096 01:08:02.113 iodepth=1 01:08:02.113 norandommap=1 01:08:02.113 numjobs=1 01:08:02.113 01:08:02.113 [job0] 01:08:02.113 filename=/dev/nvme0n1 01:08:02.113 [job1] 01:08:02.113 
filename=/dev/nvme0n2 01:08:02.113 [job2] 01:08:02.113 filename=/dev/nvme0n3 01:08:02.113 [job3] 01:08:02.113 filename=/dev/nvme0n4 01:08:02.395 Could not set queue depth (nvme0n1) 01:08:02.395 Could not set queue depth (nvme0n2) 01:08:02.395 Could not set queue depth (nvme0n3) 01:08:02.395 Could not set queue depth (nvme0n4) 01:08:02.395 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:08:02.395 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:08:02.395 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:08:02.395 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:08:02.395 fio-3.35 01:08:02.395 Starting 4 threads 01:08:05.679 11:05:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 01:08:05.679 fio: pid=81110, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 01:08:05.679 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=75264000, buflen=4096 01:08:05.679 11:05:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 01:08:05.679 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=51912704, buflen=4096 01:08:05.679 fio: pid=81109, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 01:08:05.679 11:05:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:08:05.679 11:05:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 01:08:05.679 fio: pid=81107, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 01:08:05.679 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=24788992, buflen=4096 01:08:05.938 11:05:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:08:05.938 11:05:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 01:08:05.938 fio: pid=81108, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 01:08:05.938 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=63414272, buflen=4096 01:08:05.938 01:08:05.938 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81107: Mon Jul 22 11:05:11 2024 01:08:05.938 read: IOPS=6797, BW=26.5MiB/s (27.8MB/s)(87.6MiB/3301msec) 01:08:05.938 slat (usec): min=6, max=15858, avg=10.03, stdev=148.55 01:08:05.938 clat (usec): min=94, max=1906, avg=136.40, stdev=27.16 01:08:05.938 lat (usec): min=105, max=16106, avg=146.42, stdev=151.90 01:08:05.938 clat percentiles (usec): 01:08:05.938 | 1.00th=[ 117], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 127], 01:08:05.938 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 01:08:05.938 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 155], 01:08:05.938 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 265], 99.95th=[ 553], 01:08:05.938 | 99.99th=[ 1336] 01:08:05.938 bw ( KiB/s): min=26784, max=28112, per=35.01%, avg=27456.00, stdev=501.59, samples=6 01:08:05.938 iops : min= 6696, max= 7028, avg=6864.00, stdev=125.40, samples=6 01:08:05.938 lat (usec) : 100=0.02%, 250=99.87%, 500=0.04%, 
750=0.02%, 1000=0.02% 01:08:05.938 lat (msec) : 2=0.03% 01:08:05.938 cpu : usr=1.24%, sys=5.21%, ctx=22445, majf=0, minf=1 01:08:05.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:08:05.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:05.938 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:05.938 issued rwts: total=22437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:05.938 latency : target=0, window=0, percentile=100.00%, depth=1 01:08:05.938 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81108: Mon Jul 22 11:05:11 2024 01:08:05.938 read: IOPS=4401, BW=17.2MiB/s (18.0MB/s)(60.5MiB/3518msec) 01:08:05.938 slat (usec): min=6, max=11297, avg=11.66, stdev=172.84 01:08:05.938 clat (usec): min=16, max=5159, avg=214.74, stdev=73.21 01:08:05.938 lat (usec): min=97, max=11633, avg=226.40, stdev=187.59 01:08:05.938 clat percentiles (usec): 01:08:05.938 | 1.00th=[ 101], 5.00th=[ 112], 10.00th=[ 124], 20.00th=[ 180], 01:08:05.938 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 01:08:05.938 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 01:08:05.938 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 660], 99.95th=[ 1012], 01:08:05.938 | 99.99th=[ 2606] 01:08:05.938 bw ( KiB/s): min=15408, max=16768, per=20.72%, avg=16246.67, stdev=489.26, samples=6 01:08:05.938 iops : min= 3852, max= 4192, avg=4061.67, stdev=122.31, samples=6 01:08:05.938 lat (usec) : 20=0.01%, 100=0.68%, 250=84.38%, 500=14.81%, 750=0.04% 01:08:05.938 lat (usec) : 1000=0.03% 01:08:05.938 lat (msec) : 2=0.03%, 4=0.01%, 10=0.01% 01:08:05.938 cpu : usr=0.97%, sys=3.33%, ctx=15492, majf=0, minf=1 01:08:05.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:08:05.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:05.938 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:05.938 issued rwts: total=15483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:05.938 latency : target=0, window=0, percentile=100.00%, depth=1 01:08:05.938 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81109: Mon Jul 22 11:05:11 2024 01:08:05.938 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(49.5MiB/3098msec) 01:08:05.938 slat (usec): min=7, max=11277, avg=10.21, stdev=121.10 01:08:05.939 clat (usec): min=43, max=1736, avg=233.33, stdev=30.30 01:08:05.939 lat (usec): min=120, max=11544, avg=243.54, stdev=125.38 01:08:05.939 clat percentiles (usec): 01:08:05.939 | 1.00th=[ 143], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 221], 01:08:05.939 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 01:08:05.939 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 01:08:05.939 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 429], 99.95th=[ 529], 01:08:05.939 | 99.99th=[ 1401] 01:08:05.939 bw ( KiB/s): min=15392, max=16960, per=20.88%, avg=16374.67, stdev=536.33, samples=6 01:08:05.939 iops : min= 3848, max= 4240, avg=4093.67, stdev=134.08, samples=6 01:08:05.939 lat (usec) : 50=0.01%, 250=82.56%, 500=17.36%, 750=0.02%, 1000=0.02% 01:08:05.939 lat (msec) : 2=0.02% 01:08:05.939 cpu : usr=0.87%, sys=3.23%, ctx=12682, majf=0, minf=1 01:08:05.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:08:05.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:05.939 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 01:08:05.939 issued rwts: total=12675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:05.939 latency : target=0, window=0, percentile=100.00%, depth=1 01:08:05.939 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81110: Mon Jul 22 11:05:11 2024 01:08:05.939 read: IOPS=6391, BW=25.0MiB/s (26.2MB/s)(71.8MiB/2875msec) 01:08:05.939 slat (nsec): min=7043, max=90813, avg=8034.69, stdev=2031.05 01:08:05.939 clat (usec): min=117, max=1955, avg=147.61, stdev=27.09 01:08:05.939 lat (usec): min=124, max=1969, avg=155.65, stdev=27.48 01:08:05.939 clat percentiles (usec): 01:08:05.939 | 1.00th=[ 126], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 137], 01:08:05.939 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 01:08:05.939 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 167], 01:08:05.939 | 99.00th=[ 192], 99.50th=[ 255], 99.90th=[ 449], 99.95th=[ 474], 01:08:05.939 | 99.99th=[ 1483] 01:08:05.939 bw ( KiB/s): min=24176, max=26008, per=32.50%, avg=25488.00, stdev=751.85, samples=5 01:08:05.939 iops : min= 6044, max= 6502, avg=6372.00, stdev=187.96, samples=5 01:08:05.939 lat (usec) : 250=99.46%, 500=0.49%, 750=0.03%, 1000=0.01% 01:08:05.939 lat (msec) : 2=0.01% 01:08:05.939 cpu : usr=0.84%, sys=5.18%, ctx=18377, majf=0, minf=2 01:08:05.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:08:05.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:05.939 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:08:05.939 issued rwts: total=18376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:08:05.939 latency : target=0, window=0, percentile=100.00%, depth=1 01:08:05.939 01:08:05.939 Run status group 0 (all jobs): 01:08:05.939 READ: bw=76.6MiB/s (80.3MB/s), 16.0MiB/s-26.5MiB/s (16.8MB/s-27.8MB/s), io=269MiB (282MB), run=2875-3518msec 01:08:05.939 01:08:05.939 Disk stats (read/write): 01:08:05.939 nvme0n1: ios=21257/0, merge=0/0, ticks=2898/0, in_queue=2898, util=94.92% 01:08:05.939 nvme0n2: ios=14340/0, merge=0/0, ticks=3202/0, in_queue=3202, util=95.28% 01:08:05.939 nvme0n3: ios=11770/0, merge=0/0, ticks=2763/0, in_queue=2763, util=96.54% 01:08:05.939 nvme0n4: ios=18376/0, merge=0/0, ticks=2720/0, in_queue=2720, util=96.57% 01:08:05.939 11:05:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:08:05.939 11:05:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 01:08:06.198 11:05:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:08:06.198 11:05:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 01:08:06.455 11:05:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:08:06.455 11:05:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 01:08:06.713 11:05:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:08:06.713 11:05:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 01:08:06.971 11:05:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev 
in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:08:06.971 11:05:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 01:08:06.971 11:05:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 01:08:06.971 11:05:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 81063 01:08:06.971 11:05:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 01:08:06.971 11:05:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:08:06.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:08:06.971 11:05:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:08:06.971 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 01:08:06.971 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:08:06.971 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:08:07.230 nvmf hotplug test: fio failed as expected 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 01:08:07.230 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:08:07.230 rmmod nvme_tcp 01:08:07.230 rmmod nvme_fabrics 01:08:07.488 rmmod nvme_keyring 01:08:07.488 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:08:07.488 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 01:08:07.488 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 01:08:07.488 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 80688 ']' 01:08:07.488 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 80688 01:08:07.488 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 80688 ']' 01:08:07.488 11:05:12 nvmf_tcp.nvmf_fio_target 
-- common/autotest_common.sh@952 -- # kill -0 80688 01:08:07.488 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 01:08:07.488 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:08:07.488 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80688 01:08:07.488 killing process with pid 80688 01:08:07.488 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:08:07.488 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:08:07.489 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80688' 01:08:07.489 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 80688 01:08:07.489 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 80688 01:08:07.489 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:08:07.489 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:08:07.489 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:08:07.489 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:08:07.489 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 01:08:07.489 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:07.489 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:07.489 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:07.748 11:05:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:08:07.748 01:08:07.748 real 0m18.464s 01:08:07.748 user 1m8.545s 01:08:07.748 sys 0m10.534s 01:08:07.748 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 01:08:07.748 11:05:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:08:07.748 ************************************ 01:08:07.748 END TEST nvmf_fio_target 01:08:07.748 ************************************ 01:08:07.748 11:05:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:08:07.748 11:05:12 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 01:08:07.748 11:05:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:08:07.748 11:05:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:08:07.748 11:05:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:08:07.748 ************************************ 01:08:07.748 START TEST nvmf_bdevio 01:08:07.748 ************************************ 01:08:07.748 11:05:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 01:08:08.007 * Looking for test storage... 
01:08:08.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:08.007 11:05:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:08.007 11:05:13 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:08.007 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:08:08.008 Cannot find device "nvmf_tgt_br" 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:08:08.008 Cannot find device "nvmf_tgt_br2" 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:08:08.008 Cannot find device "nvmf_tgt_br" 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:08:08.008 Cannot find device "nvmf_tgt_br2" 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:08.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:08.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:08:08.008 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:08:08.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:08:08.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 01:08:08.266 01:08:08.266 --- 10.0.0.2 ping statistics --- 01:08:08.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:08.266 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:08:08.266 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:08:08.266 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 01:08:08.266 01:08:08.266 --- 10.0.0.3 ping statistics --- 01:08:08.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:08.266 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:08.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:08:08.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 01:08:08.266 01:08:08.266 --- 10.0.0.1 ping statistics --- 01:08:08.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:08.266 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:08:08.266 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:08:08.524 11:05:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:08:08.524 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:08:08.524 11:05:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 01:08:08.524 11:05:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:08:08.524 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=81365 01:08:08.524 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 01:08:08.524 11:05:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 81365 01:08:08.524 11:05:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 81365 ']' 01:08:08.524 11:05:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:08.524 11:05:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 01:08:08.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:08.524 11:05:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:08.524 11:05:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 01:08:08.524 11:05:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:08:08.524 [2024-07-22 11:05:13.533253] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:08:08.524 [2024-07-22 11:05:13.533747] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:08.524 [2024-07-22 11:05:13.669301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:08:08.781 [2024-07-22 11:05:13.743087] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:08.781 [2024-07-22 11:05:13.743166] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:08:08.781 [2024-07-22 11:05:13.743183] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:08.781 [2024-07-22 11:05:13.743196] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:08.781 [2024-07-22 11:05:13.743207] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:08.781 [2024-07-22 11:05:13.743435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 01:08:08.782 [2024-07-22 11:05:13.744090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 01:08:08.782 [2024-07-22 11:05:13.744740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 01:08:08.782 [2024-07-22 11:05:13.744746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:08:08.782 [2024-07-22 11:05:13.800085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:08:09.348 [2024-07-22 11:05:14.468747] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:08:09.348 Malloc0 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:08:09.348 [2024-07-22 11:05:14.544732] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:08:09.348 11:05:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:08:09.348 { 01:08:09.348 "params": { 01:08:09.348 "name": "Nvme$subsystem", 01:08:09.348 "trtype": "$TEST_TRANSPORT", 01:08:09.348 "traddr": "$NVMF_FIRST_TARGET_IP", 01:08:09.348 "adrfam": "ipv4", 01:08:09.348 "trsvcid": "$NVMF_PORT", 01:08:09.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:08:09.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:08:09.348 "hdgst": ${hdgst:-false}, 01:08:09.348 "ddgst": ${ddgst:-false} 01:08:09.348 }, 01:08:09.348 "method": "bdev_nvme_attach_controller" 01:08:09.348 } 01:08:09.348 EOF 01:08:09.348 )") 01:08:09.605 11:05:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 01:08:09.605 11:05:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 01:08:09.605 11:05:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 01:08:09.605 11:05:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:08:09.605 "params": { 01:08:09.605 "name": "Nvme1", 01:08:09.605 "trtype": "tcp", 01:08:09.605 "traddr": "10.0.0.2", 01:08:09.605 "adrfam": "ipv4", 01:08:09.605 "trsvcid": "4420", 01:08:09.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:08:09.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:08:09.605 "hdgst": false, 01:08:09.605 "ddgst": false 01:08:09.605 }, 01:08:09.605 "method": "bdev_nvme_attach_controller" 01:08:09.605 }' 01:08:09.605 [2024-07-22 11:05:14.599642] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
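
Everything the bdevio run needs on the target side is assembled over JSON-RPC before the bdevio app attaches: a TCP transport, a 64 MiB malloc bdev, a subsystem with one namespace, and a listener on 10.0.0.2:4420. A condensed sketch of that sequence, reconstructed from the rpc_cmd traces above (it assumes the nvmf_tgt started earlier is still serving the default RPC socket /var/tmp/spdk.sock; the $rpc shorthand is ours, not the harness's):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # defaults to /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192        # transport options exactly as traced above
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON block printed just above is the other half of the setup: gen_nvmf_target_json hands it to bdevio on /dev/fd/62, so bdevio's own bdev layer in effect runs bdev_nvme_attach_controller against that listener (Nvme1, NVMe/TCP, 10.0.0.2:4420, header/data digests off) and exercises the resulting bdev.
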
01:08:09.605 [2024-07-22 11:05:14.599719] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81404 ] 01:08:09.605 [2024-07-22 11:05:14.745689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:08:09.605 [2024-07-22 11:05:14.796091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:08:09.605 [2024-07-22 11:05:14.796438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:08:09.605 [2024-07-22 11:05:14.796446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:08:09.863 [2024-07-22 11:05:14.847200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:08:09.863 I/O targets: 01:08:09.863 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:08:09.863 01:08:09.863 01:08:09.863 CUnit - A unit testing framework for C - Version 2.1-3 01:08:09.863 http://cunit.sourceforge.net/ 01:08:09.863 01:08:09.863 01:08:09.863 Suite: bdevio tests on: Nvme1n1 01:08:09.863 Test: blockdev write read block ...passed 01:08:09.864 Test: blockdev write zeroes read block ...passed 01:08:09.864 Test: blockdev write zeroes read no split ...passed 01:08:09.864 Test: blockdev write zeroes read split ...passed 01:08:09.864 Test: blockdev write zeroes read split partial ...passed 01:08:09.864 Test: blockdev reset ...[2024-07-22 11:05:14.982835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:08:09.864 [2024-07-22 11:05:14.982938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19592a0 (9): Bad file descriptor 01:08:09.864 [2024-07-22 11:05:15.002432] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
01:08:09.864 passed 01:08:09.864 Test: blockdev write read 8 blocks ...passed 01:08:09.864 Test: blockdev write read size > 128k ...passed 01:08:09.864 Test: blockdev write read invalid size ...passed 01:08:09.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:08:09.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:08:09.864 Test: blockdev write read max offset ...passed 01:08:09.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:08:09.864 Test: blockdev writev readv 8 blocks ...passed 01:08:09.864 Test: blockdev writev readv 30 x 1block ...passed 01:08:09.864 Test: blockdev writev readv block ...passed 01:08:09.864 Test: blockdev writev readv size > 128k ...passed 01:08:09.864 Test: blockdev writev readv size > 128k in two iovs ...passed 01:08:09.864 Test: blockdev comparev and writev ...[2024-07-22 11:05:15.009737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:08:09.864 [2024-07-22 11:05:15.009792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:08:09.864 [2024-07-22 11:05:15.009815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:08:09.864 [2024-07-22 11:05:15.009830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:08:09.864 [2024-07-22 11:05:15.010327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:08:09.864 [2024-07-22 11:05:15.010371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:08:09.864 [2024-07-22 11:05:15.010396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:08:09.864 [2024-07-22 11:05:15.010413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:08:09.864 [2024-07-22 11:05:15.010867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:08:09.864 [2024-07-22 11:05:15.010903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:08:09.864 [2024-07-22 11:05:15.010927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:08:09.864 [2024-07-22 11:05:15.010940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:08:09.864 [2024-07-22 11:05:15.011310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:08:09.864 [2024-07-22 11:05:15.011336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:08:09.864 [2024-07-22 11:05:15.011354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:08:09.864 [2024-07-22 11:05:15.011367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:08:09.864 passed 01:08:09.864 Test: blockdev nvme passthru rw ...passed 01:08:09.864 Test: blockdev nvme passthru vendor specific ...[2024-07-22 11:05:15.012188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:08:09.864 [2024-07-22 11:05:15.012216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:09.864 [2024-07-22 11:05:15.012314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:08:09.864 [2024-07-22 11:05:15.012332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:09.864 [2024-07-22 11:05:15.012419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:08:09.864 [2024-07-22 11:05:15.012436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:08:09.864 [2024-07-22 11:05:15.012511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:08:09.864 [2024-07-22 11:05:15.012528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:09.864 passed 01:08:09.864 Test: blockdev nvme admin passthru ...passed 01:08:09.864 Test: blockdev copy ...passed 01:08:09.864 01:08:09.864 Run Summary: Type Total Ran Passed Failed Inactive 01:08:09.864 suites 1 1 n/a 0 0 01:08:09.864 tests 23 23 23 0 0 01:08:09.864 asserts 152 152 152 0 n/a 01:08:09.864 01:08:09.864 Elapsed time = 0.143 seconds 01:08:10.122 11:05:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:08:10.122 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:10.122 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:08:10.122 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:10.122 11:05:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:08:10.122 11:05:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 01:08:10.122 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 01:08:10.122 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 01:08:10.122 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:08:10.122 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 01:08:10.123 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 01:08:10.123 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:08:10.123 rmmod nvme_tcp 01:08:10.123 rmmod nvme_fabrics 01:08:10.123 rmmod nvme_keyring 01:08:10.123 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:08:10.123 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 01:08:10.123 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 01:08:10.123 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 81365 ']' 01:08:10.123 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 81365 01:08:10.123 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
81365 ']' 01:08:10.123 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 81365 01:08:10.123 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 01:08:10.123 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:08:10.123 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81365 01:08:10.381 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 01:08:10.381 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 01:08:10.381 killing process with pid 81365 01:08:10.381 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81365' 01:08:10.381 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 81365 01:08:10.382 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 81365 01:08:10.382 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:08:10.382 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:08:10.382 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:08:10.382 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:08:10.382 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 01:08:10.382 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:10.382 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:10.382 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:10.382 11:05:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:08:10.641 01:08:10.641 real 0m2.750s 01:08:10.641 user 0m8.254s 01:08:10.641 sys 0m0.902s 01:08:10.641 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 01:08:10.641 11:05:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:08:10.641 ************************************ 01:08:10.641 END TEST nvmf_bdevio 01:08:10.641 ************************************ 01:08:10.641 11:05:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:08:10.641 11:05:15 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 01:08:10.641 11:05:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:08:10.641 11:05:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:08:10.641 11:05:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:08:10.641 ************************************ 01:08:10.641 START TEST nvmf_auth_target 01:08:10.641 ************************************ 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 01:08:10.641 * Looking for test storage... 
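
The auth-target test that starts here rebuilds the same throwaway NVMe/TCP test network the bdevio run used above (and tore down again just before its END banner): a private network namespace for the target, veth pairs whose root-namespace ends hang off one bridge, 10.0.0.1 for the initiator and 10.0.0.2/10.0.0.3 for the target listeners. A condensed sketch of that topology, using the device names and addresses from the trace (run as root; it assumes none of the devices already exist):

ip netns add nvmf_tgt_ns_spdk
# veth pairs: *_if is the addressed end, *_br is the end that joins the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up and bridge the root-namespace ends together
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# let NVMe/TCP traffic (port 4420) in and let the bridge forward it
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow the setup are just connectivity checks across the bridge; teardown (nvmftestfini, traced before the END banner above) is roughly the mirror image: unload nvme-tcp/nvme-fabrics, kill the target, remove the namespace and the bridge/veth devices, and flush the initiator address.
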
01:08:10.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:10.641 11:05:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:10.642 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:08:10.901 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:08:10.901 Cannot find device "nvmf_tgt_br" 01:08:10.901 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 01:08:10.901 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:08:10.901 Cannot find device "nvmf_tgt_br2" 01:08:10.901 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 01:08:10.901 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:08:10.901 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:08:10.901 Cannot find device "nvmf_tgt_br" 01:08:10.901 
11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 01:08:10.901 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:08:10.901 Cannot find device "nvmf_tgt_br2" 01:08:10.901 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 01:08:10.901 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:08:10.901 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:08:10.901 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:10.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:10.901 11:05:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 01:08:10.901 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:10.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:10.901 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 01:08:10.901 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:08:10.901 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:10.901 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:10.901 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:10.901 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:08:10.901 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:11.160 11:05:16 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:08:11.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:08:11.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 01:08:11.160 01:08:11.160 --- 10.0.0.2 ping statistics --- 01:08:11.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:11.160 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:08:11.160 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:08:11.160 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 01:08:11.160 01:08:11.160 --- 10.0.0.3 ping statistics --- 01:08:11.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:11.160 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:11.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:08:11.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 01:08:11.160 01:08:11.160 --- 10.0.0.1 ping statistics --- 01:08:11.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:11.160 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=81584 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 81584 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 81584 ']' 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:11.160 11:05:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 01:08:11.160 11:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:12.096 11:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:08:12.096 11:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 01:08:12.096 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:08:12.096 11:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 01:08:12.096 11:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:12.355 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:12.355 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=81616 01:08:12.355 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 01:08:12.355 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 01:08:12.355 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 01:08:12.355 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 01:08:12.355 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:08:12.355 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 01:08:12.355 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 01:08:12.355 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 01:08:12.355 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:08:12.355 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c104a04fe31d3663f9ec68d037044c5f2aee808cfd34c4ce 01:08:12.355 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 01:08:12.355 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.FdH 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c104a04fe31d3663f9ec68d037044c5f2aee808cfd34c4ce 0 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c104a04fe31d3663f9ec68d037044c5f2aee808cfd34c4ce 0 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c104a04fe31d3663f9ec68d037044c5f2aee808cfd34c4ce 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.FdH 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.FdH 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.FdH 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7fdb9a892c0918335d06b8501eed1cd810cadd395b27fa62954dac25943b448b 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.O36 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7fdb9a892c0918335d06b8501eed1cd810cadd395b27fa62954dac25943b448b 3 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7fdb9a892c0918335d06b8501eed1cd810cadd395b27fa62954dac25943b448b 3 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7fdb9a892c0918335d06b8501eed1cd810cadd395b27fa62954dac25943b448b 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.O36 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.O36 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.O36 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b05165f086aa491f7a64a6847a271fd5 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.M4n 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b05165f086aa491f7a64a6847a271fd5 1 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b05165f086aa491f7a64a6847a271fd5 1 
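
gen_dhchap_key, traced through here, does only two things per key: read the requested number of random bytes as hex (xxd -p -c0 -l <len/2> /dev/urandom) and wrap them in a DHHC-1 secret string for the requested digest id (0 = null, 1 = sha256, 2 = sha384, 3 = sha512, per the digests map above), storing the result mode 0600 in a /tmp/spdk.key-* file. The python step that does the wrapping is not expanded in the trace; the sketch below reproduces it under the assumption that the secret uses the common DHHC-1 representation (base64 of the key bytes followed by their 4-byte little-endian CRC-32), and the helper name make_dhchap_secret is ours:

make_dhchap_secret() {   # usage: make_dhchap_secret <hex-key> <digest-id 0..3>
  python3 - "$1" "$2" <<'PY'
import base64, struct, sys, zlib
raw = bytes.fromhex(sys.argv[1])
crc = struct.pack('<I', zlib.crc32(raw))           # assumed trailing CRC-32, little endian
print('DHHC-1:%02x:%s:' % (int(sys.argv[2]), base64.b64encode(raw + crc).decode()))
PY
}

key=$(xxd -p -c0 -l 16 /dev/urandom)               # 32 hex chars = 16 bytes, as in 'gen_dhchap_key sha256 32'
file=$(mktemp -t spdk.key-sha256.XXX)
make_dhchap_secret "$key" 1 > "$file" && chmod 0600 "$file"
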
01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b05165f086aa491f7a64a6847a271fd5 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.M4n 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.M4n 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.M4n 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e037c7943bab7918b528b39df3c69bf4a54286820e84167b 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.yWm 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e037c7943bab7918b528b39df3c69bf4a54286820e84167b 2 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e037c7943bab7918b528b39df3c69bf4a54286820e84167b 2 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e037c7943bab7918b528b39df3c69bf4a54286820e84167b 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 01:08:12.356 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.yWm 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.yWm 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.yWm 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 01:08:12.615 
11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c2a61a6fae566ac8e82e92df1e2f50f91918a699fa7f8c37 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.QZH 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c2a61a6fae566ac8e82e92df1e2f50f91918a699fa7f8c37 2 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c2a61a6fae566ac8e82e92df1e2f50f91918a699fa7f8c37 2 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c2a61a6fae566ac8e82e92df1e2f50f91918a699fa7f8c37 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.QZH 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.QZH 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.QZH 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0e2ad30499d577f523c7863eabbd2fea 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.x3D 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0e2ad30499d577f523c7863eabbd2fea 1 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0e2ad30499d577f523c7863eabbd2fea 1 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0e2ad30499d577f523c7863eabbd2fea 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.x3D 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.x3D 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.x3D 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 01:08:12.615 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=06aabcabfec4cff9eb91ecea8edce3884199579e80853bb81682d0af1d5005ad 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.RPW 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 06aabcabfec4cff9eb91ecea8edce3884199579e80853bb81682d0af1d5005ad 3 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 06aabcabfec4cff9eb91ecea8edce3884199579e80853bb81682d0af1d5005ad 3 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=06aabcabfec4cff9eb91ecea8edce3884199579e80853bb81682d0af1d5005ad 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 01:08:12.616 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.RPW 01:08:12.874 11:05:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.RPW 01:08:12.874 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.RPW 01:08:12.874 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 01:08:12.874 11:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 81584 01:08:12.874 11:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 81584 ']' 01:08:12.874 11:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:12.874 11:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 01:08:12.874 11:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:12.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:12.874 11:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 01:08:12.874 11:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:12.874 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:08:12.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
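
With all four key files (and three controller keys; ckeys[3] is intentionally left empty) generated, the lines that follow wire them up for DH-HMAC-CHAP: each file is registered under a key name on both RPC sockets, the host app is limited to the digest/dhgroup pair under test, the target subsystem is told which key(s) to require from this host NQN, and the host then attaches with the same pair. A condensed sketch of one such pass (sha256 digest, null dhgroup, key0/ckey0), using only RPCs that appear in the trace; the tgt/host wrappers are ours, and it assumes nqn.2024-03.io.spdk:cnode0 with a TCP listener on 10.0.0.2:4420 already exists on the target:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
tgt()  { $rpc "$@"; }                            # target RPC socket (default /var/tmp/spdk.sock)
host() { $rpc -s /var/tmp/host.sock "$@"; }      # host/initiator app started with -r /var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb

# register the secret files on both sides under matching key names
tgt  keyring_file_add_key key0  /tmp/spdk.key-null.FdH
tgt  keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O36
host keyring_file_add_key key0  /tmp/spdk.key-null.FdH
host keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O36

# host offers only the digest/dhgroup combination under test
host bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# target requires DH-HMAC-CHAP from this host (key0), bidirectional via ckey0
tgt nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host connects with the same pair; the test then checks the controller and qpairs
host bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
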
01:08:12.874 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 01:08:12.874 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 81616 /var/tmp/host.sock 01:08:12.874 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 81616 ']' 01:08:12.874 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 01:08:12.874 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 01:08:12.874 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 01:08:12.874 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 01:08:12.874 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:13.133 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:08:13.133 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 01:08:13.133 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 01:08:13.133 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:13.133 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:13.133 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:13.133 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 01:08:13.133 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FdH 01:08:13.133 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:13.133 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:13.133 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:13.133 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.FdH 01:08:13.133 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.FdH 01:08:13.392 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.O36 ]] 01:08:13.392 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O36 01:08:13.392 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:13.392 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:13.392 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:13.392 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O36 01:08:13.392 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O36 01:08:13.651 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 01:08:13.651 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.M4n 01:08:13.651 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:13.651 11:05:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:13.651 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:13.651 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.M4n 01:08:13.651 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.M4n 01:08:13.651 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.yWm ]] 01:08:13.651 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yWm 01:08:13.651 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:13.651 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:13.651 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:13.651 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yWm 01:08:13.652 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yWm 01:08:13.940 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 01:08:13.940 11:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.QZH 01:08:13.940 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:13.940 11:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:13.940 11:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:13.940 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.QZH 01:08:13.940 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.QZH 01:08:14.219 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.x3D ]] 01:08:14.219 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.x3D 01:08:14.219 11:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:14.219 11:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:14.219 11:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:14.219 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.x3D 01:08:14.219 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.x3D 01:08:14.219 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 01:08:14.219 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.RPW 01:08:14.219 11:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:14.219 11:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:14.219 11:05:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:14.219 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.RPW 01:08:14.219 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.RPW 01:08:14.479 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 01:08:14.479 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 01:08:14.479 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:08:14.479 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:14.479 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:08:14.479 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:08:14.738 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 01:08:14.739 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:14.739 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:14.739 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:08:14.739 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:08:14.739 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:14.739 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:14.739 11:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:14.739 11:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:14.739 11:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:14.739 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:14.739 11:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:14.997 01:08:14.997 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:14.997 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:14.997 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:15.257 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:15.257 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:15.257 
11:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:15.257 11:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:15.257 11:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:15.257 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:15.257 { 01:08:15.257 "cntlid": 1, 01:08:15.257 "qid": 0, 01:08:15.257 "state": "enabled", 01:08:15.257 "thread": "nvmf_tgt_poll_group_000", 01:08:15.257 "listen_address": { 01:08:15.257 "trtype": "TCP", 01:08:15.257 "adrfam": "IPv4", 01:08:15.257 "traddr": "10.0.0.2", 01:08:15.257 "trsvcid": "4420" 01:08:15.257 }, 01:08:15.257 "peer_address": { 01:08:15.257 "trtype": "TCP", 01:08:15.257 "adrfam": "IPv4", 01:08:15.257 "traddr": "10.0.0.1", 01:08:15.257 "trsvcid": "35858" 01:08:15.257 }, 01:08:15.257 "auth": { 01:08:15.257 "state": "completed", 01:08:15.257 "digest": "sha256", 01:08:15.257 "dhgroup": "null" 01:08:15.257 } 01:08:15.257 } 01:08:15.257 ]' 01:08:15.257 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:15.257 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:15.257 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:15.257 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:08:15.257 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:15.257 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:15.257 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:15.257 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:15.517 11:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:19.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups null 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:19.720 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:19.720 { 01:08:19.720 "cntlid": 3, 01:08:19.720 "qid": 0, 01:08:19.720 "state": "enabled", 01:08:19.720 "thread": "nvmf_tgt_poll_group_000", 01:08:19.720 "listen_address": { 01:08:19.720 "trtype": "TCP", 01:08:19.720 "adrfam": "IPv4", 01:08:19.720 "traddr": "10.0.0.2", 01:08:19.720 "trsvcid": "4420" 01:08:19.720 }, 01:08:19.720 "peer_address": { 01:08:19.720 "trtype": "TCP", 01:08:19.720 "adrfam": "IPv4", 01:08:19.720 "traddr": "10.0.0.1", 01:08:19.720 "trsvcid": "58962" 01:08:19.720 }, 01:08:19.720 "auth": { 01:08:19.720 "state": "completed", 01:08:19.720 "digest": "sha256", 01:08:19.720 "dhgroup": "null" 01:08:19.720 } 01:08:19.720 } 01:08:19.720 ]' 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:19.720 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:19.721 11:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:19.979 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:08:20.545 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:20.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:20.545 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:20.545 11:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:20.545 11:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:20.545 11:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:20.545 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:20.545 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:08:20.545 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:08:20.804 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 01:08:20.804 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:20.804 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:20.804 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:08:20.804 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:08:20.804 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:20.804 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:20.804 11:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:20.804 11:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:20.804 11:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
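Every connect_authenticate round in the trace has the same shape: pin the host to one digest and DH group, allow the host NQN on the subsystem with a key pair, attach, then confirm through nvmf_subsystem_get_qpairs that the qpair reports completed DH-HMAC-CHAP authentication with exactly that digest and group before detaching. A condensed sketch of one round (sha256 with the "null" group, key1/ckey1) using the same sockets, NQNs and rpc.py invocation as the trace; the three separate jq checks are folded into one jq -e expression here.

# One authentication round, condensed from the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb

# host side: negotiate only sha256 and the null DH group
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# target side: allow the host with a key and a controller key (bidirectional auth)
$rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# host side: attach, then check on the target that the qpair authenticated as expected
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 \
    -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs "$subnqn" \
    | jq -e '.[0].auth | .state == "completed" and .digest == "sha256" and .dhgroup == "null"'

$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0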
01:08:20.804 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:20.804 11:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:21.071 01:08:21.071 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:21.071 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:21.071 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:21.353 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:21.354 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:21.354 11:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:21.354 11:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:21.354 11:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:21.354 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:21.354 { 01:08:21.354 "cntlid": 5, 01:08:21.354 "qid": 0, 01:08:21.354 "state": "enabled", 01:08:21.354 "thread": "nvmf_tgt_poll_group_000", 01:08:21.354 "listen_address": { 01:08:21.354 "trtype": "TCP", 01:08:21.354 "adrfam": "IPv4", 01:08:21.354 "traddr": "10.0.0.2", 01:08:21.354 "trsvcid": "4420" 01:08:21.354 }, 01:08:21.354 "peer_address": { 01:08:21.354 "trtype": "TCP", 01:08:21.354 "adrfam": "IPv4", 01:08:21.354 "traddr": "10.0.0.1", 01:08:21.354 "trsvcid": "59006" 01:08:21.354 }, 01:08:21.354 "auth": { 01:08:21.354 "state": "completed", 01:08:21.354 "digest": "sha256", 01:08:21.354 "dhgroup": "null" 01:08:21.354 } 01:08:21.354 } 01:08:21.354 ]' 01:08:21.354 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:21.354 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:21.354 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:21.354 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:08:21.354 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:21.354 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:21.354 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:21.354 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:21.613 11:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret 
DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:08:22.182 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:22.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:22.182 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:22.182 11:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:22.182 11:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:22.182 11:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:22.182 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:22.182 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:08:22.182 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:08:22.441 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 01:08:22.441 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:22.441 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:22.441 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:08:22.441 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:08:22.441 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:22.441 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:08:22.441 11:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:22.441 11:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:22.441 11:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:22.441 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:08:22.441 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:08:22.701 01:08:22.701 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:22.701 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:22.701 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:22.961 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:22.961 11:05:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:22.961 11:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:22.961 11:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:22.961 11:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:22.961 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:22.961 { 01:08:22.961 "cntlid": 7, 01:08:22.961 "qid": 0, 01:08:22.961 "state": "enabled", 01:08:22.961 "thread": "nvmf_tgt_poll_group_000", 01:08:22.961 "listen_address": { 01:08:22.961 "trtype": "TCP", 01:08:22.961 "adrfam": "IPv4", 01:08:22.961 "traddr": "10.0.0.2", 01:08:22.961 "trsvcid": "4420" 01:08:22.961 }, 01:08:22.961 "peer_address": { 01:08:22.961 "trtype": "TCP", 01:08:22.961 "adrfam": "IPv4", 01:08:22.961 "traddr": "10.0.0.1", 01:08:22.961 "trsvcid": "59022" 01:08:22.961 }, 01:08:22.961 "auth": { 01:08:22.961 "state": "completed", 01:08:22.961 "digest": "sha256", 01:08:22.961 "dhgroup": "null" 01:08:22.961 } 01:08:22.961 } 01:08:22.961 ]' 01:08:22.961 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:22.961 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:22.961 11:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:22.961 11:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:08:22.961 11:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:22.961 11:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:22.961 11:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:22.961 11:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:23.220 11:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:08:23.802 11:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:23.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:23.802 11:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:23.802 11:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:23.802 11:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:23.802 11:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:23.802 11:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:08:23.802 11:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:23.802 11:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:08:23.802 11:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:08:24.061 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 01:08:24.061 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:24.061 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:24.061 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:08:24.061 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:08:24.061 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:24.061 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:24.061 11:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:24.061 11:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:24.061 11:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:24.061 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:24.061 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:24.319 01:08:24.319 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:24.319 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:24.319 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:24.319 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:24.319 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:24.319 11:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:24.319 11:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:24.578 11:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:24.578 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:24.578 { 01:08:24.578 "cntlid": 9, 01:08:24.578 "qid": 0, 01:08:24.578 "state": "enabled", 01:08:24.578 "thread": "nvmf_tgt_poll_group_000", 01:08:24.578 "listen_address": { 01:08:24.578 "trtype": "TCP", 01:08:24.578 "adrfam": "IPv4", 01:08:24.578 "traddr": "10.0.0.2", 01:08:24.578 "trsvcid": "4420" 01:08:24.578 }, 01:08:24.578 "peer_address": { 01:08:24.578 "trtype": "TCP", 01:08:24.578 "adrfam": "IPv4", 01:08:24.578 "traddr": "10.0.0.1", 01:08:24.578 "trsvcid": "59050" 01:08:24.578 }, 01:08:24.578 "auth": { 01:08:24.578 "state": "completed", 01:08:24.578 "digest": "sha256", 01:08:24.578 "dhgroup": 
"ffdhe2048" 01:08:24.578 } 01:08:24.578 } 01:08:24.578 ]' 01:08:24.578 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:24.578 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:24.578 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:24.578 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:08:24.578 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:24.578 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:24.578 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:24.578 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:24.837 11:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:08:25.403 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:25.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:25.403 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:25.403 11:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:25.403 11:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:25.403 11:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:25.404 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:25.404 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:08:25.404 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:08:25.663 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 01:08:25.663 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:25.663 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:25.663 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:08:25.663 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:08:25.663 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:25.663 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:25.663 11:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 01:08:25.663 11:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:25.663 11:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:25.663 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:25.663 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:25.663 01:08:25.922 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:25.922 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:25.922 11:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:25.922 11:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:25.922 11:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:25.922 11:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:25.922 11:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:25.922 11:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:25.922 11:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:25.922 { 01:08:25.922 "cntlid": 11, 01:08:25.922 "qid": 0, 01:08:25.922 "state": "enabled", 01:08:25.922 "thread": "nvmf_tgt_poll_group_000", 01:08:25.922 "listen_address": { 01:08:25.922 "trtype": "TCP", 01:08:25.922 "adrfam": "IPv4", 01:08:25.922 "traddr": "10.0.0.2", 01:08:25.922 "trsvcid": "4420" 01:08:25.922 }, 01:08:25.922 "peer_address": { 01:08:25.922 "trtype": "TCP", 01:08:25.922 "adrfam": "IPv4", 01:08:25.922 "traddr": "10.0.0.1", 01:08:25.922 "trsvcid": "59088" 01:08:25.922 }, 01:08:25.922 "auth": { 01:08:25.922 "state": "completed", 01:08:25.922 "digest": "sha256", 01:08:25.922 "dhgroup": "ffdhe2048" 01:08:25.922 } 01:08:25.922 } 01:08:25.922 ]' 01:08:25.922 11:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:26.180 11:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:26.180 11:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:26.180 11:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:08:26.180 11:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:26.180 11:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:26.180 11:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:26.180 11:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:26.439 11:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:08:27.004 11:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:27.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:27.004 11:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:27.004 11:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:27.004 11:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:27.004 11:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:27.004 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:27.004 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:08:27.004 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:08:27.267 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 01:08:27.267 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:27.267 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:27.267 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:08:27.267 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:08:27.267 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:27.267 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:27.267 11:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:27.267 11:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:27.267 11:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:27.267 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:27.267 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:27.523 01:08:27.523 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:27.523 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 01:08:27.523 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:27.523 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:27.523 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:27.523 11:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:27.523 11:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:27.781 11:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:27.781 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:27.781 { 01:08:27.781 "cntlid": 13, 01:08:27.781 "qid": 0, 01:08:27.781 "state": "enabled", 01:08:27.781 "thread": "nvmf_tgt_poll_group_000", 01:08:27.781 "listen_address": { 01:08:27.781 "trtype": "TCP", 01:08:27.781 "adrfam": "IPv4", 01:08:27.781 "traddr": "10.0.0.2", 01:08:27.781 "trsvcid": "4420" 01:08:27.781 }, 01:08:27.781 "peer_address": { 01:08:27.781 "trtype": "TCP", 01:08:27.781 "adrfam": "IPv4", 01:08:27.781 "traddr": "10.0.0.1", 01:08:27.781 "trsvcid": "59110" 01:08:27.781 }, 01:08:27.781 "auth": { 01:08:27.781 "state": "completed", 01:08:27.781 "digest": "sha256", 01:08:27.781 "dhgroup": "ffdhe2048" 01:08:27.781 } 01:08:27.781 } 01:08:27.781 ]' 01:08:27.781 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:27.781 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:27.781 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:27.781 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:08:27.781 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:27.781 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:27.781 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:27.781 11:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:28.038 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:08:28.605 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:28.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:28.605 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:28.605 11:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:28.605 11:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:28.605 11:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:28.605 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 01:08:28.605 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:08:28.605 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:08:28.863 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 01:08:28.863 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:28.863 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:28.863 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:08:28.863 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:08:28.863 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:28.863 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:08:28.863 11:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:28.863 11:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:28.863 11:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:28.863 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:08:28.863 11:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:08:29.122 01:08:29.122 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:29.122 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:29.122 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:29.122 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:29.122 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:29.122 11:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:29.122 11:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:29.122 11:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:29.122 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:29.122 { 01:08:29.122 "cntlid": 15, 01:08:29.122 "qid": 0, 01:08:29.122 "state": "enabled", 01:08:29.122 "thread": "nvmf_tgt_poll_group_000", 01:08:29.122 "listen_address": { 01:08:29.122 "trtype": "TCP", 01:08:29.122 "adrfam": "IPv4", 01:08:29.122 "traddr": "10.0.0.2", 01:08:29.122 "trsvcid": "4420" 01:08:29.122 }, 01:08:29.122 "peer_address": { 01:08:29.122 "trtype": "TCP", 01:08:29.122 "adrfam": "IPv4", 01:08:29.122 "traddr": "10.0.0.1", 
01:08:29.122 "trsvcid": "59132" 01:08:29.122 }, 01:08:29.122 "auth": { 01:08:29.122 "state": "completed", 01:08:29.122 "digest": "sha256", 01:08:29.122 "dhgroup": "ffdhe2048" 01:08:29.122 } 01:08:29.122 } 01:08:29.122 ]' 01:08:29.122 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:29.381 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:29.381 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:29.381 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:08:29.381 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:29.381 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:29.381 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:29.381 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:29.639 11:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:30.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:30.207 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:30.465 01:08:30.465 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:30.465 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:30.465 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:30.723 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:30.723 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:30.723 11:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:30.723 11:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:30.723 11:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:30.723 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:30.723 { 01:08:30.723 "cntlid": 17, 01:08:30.723 "qid": 0, 01:08:30.724 "state": "enabled", 01:08:30.724 "thread": "nvmf_tgt_poll_group_000", 01:08:30.724 "listen_address": { 01:08:30.724 "trtype": "TCP", 01:08:30.724 "adrfam": "IPv4", 01:08:30.724 "traddr": "10.0.0.2", 01:08:30.724 "trsvcid": "4420" 01:08:30.724 }, 01:08:30.724 "peer_address": { 01:08:30.724 "trtype": "TCP", 01:08:30.724 "adrfam": "IPv4", 01:08:30.724 "traddr": "10.0.0.1", 01:08:30.724 "trsvcid": "47218" 01:08:30.724 }, 01:08:30.724 "auth": { 01:08:30.724 "state": "completed", 01:08:30.724 "digest": "sha256", 01:08:30.724 "dhgroup": "ffdhe3072" 01:08:30.724 } 01:08:30.724 } 01:08:30.724 ]' 01:08:30.724 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:30.982 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:30.982 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:30.982 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:08:30.982 11:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:30.982 11:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:30.982 11:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:30.982 11:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:31.240 11:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:08:31.807 11:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:31.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:31.807 11:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:31.807 11:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:31.807 11:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:31.807 11:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:31.807 11:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:31.807 11:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:08:31.807 11:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:08:31.807 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 01:08:31.807 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:31.807 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:31.807 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:08:31.807 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:08:31.807 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:31.807 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:31.807 11:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:31.807 11:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:32.071 11:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:32.071 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:32.071 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
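Besides the SPDK host-side bdev_nvme path, each round also exercises the kernel initiator: nvme-cli connects with the literal DHHC-1 secrets and disconnects again before the host entry is removed from the subsystem. A minimal sketch of that leg follows, based on the key0/ckey0 round shown above; the assumption that the generated /tmp/spdk.key-* files hold the formatted DHHC-1 strings is ours (the trace only shows the strings being passed on the command line).

# Kernel-initiator leg of a round, mirroring the nvme connect/disconnect lines above.
# Assumption: the generated /tmp/spdk.key-* files contain the DHHC-1:xx:...: strings.
subnqn=nqn.2024-03.io.spdk:cnode0
hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
host_secret=$(cat /tmp/spdk.key-null.FdH)      # key0 in DHHC-1 form
ctrl_secret=$(cat /tmp/spdk.key-sha512.O36)    # ckey0 in DHHC-1 form

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n "$subnqn"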
01:08:32.071 01:08:32.346 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:32.346 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:32.346 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:32.346 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:32.346 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:32.346 11:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:32.346 11:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:32.346 11:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:32.347 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:32.347 { 01:08:32.347 "cntlid": 19, 01:08:32.347 "qid": 0, 01:08:32.347 "state": "enabled", 01:08:32.347 "thread": "nvmf_tgt_poll_group_000", 01:08:32.347 "listen_address": { 01:08:32.347 "trtype": "TCP", 01:08:32.347 "adrfam": "IPv4", 01:08:32.347 "traddr": "10.0.0.2", 01:08:32.347 "trsvcid": "4420" 01:08:32.347 }, 01:08:32.347 "peer_address": { 01:08:32.347 "trtype": "TCP", 01:08:32.347 "adrfam": "IPv4", 01:08:32.347 "traddr": "10.0.0.1", 01:08:32.347 "trsvcid": "47242" 01:08:32.347 }, 01:08:32.347 "auth": { 01:08:32.347 "state": "completed", 01:08:32.347 "digest": "sha256", 01:08:32.347 "dhgroup": "ffdhe3072" 01:08:32.347 } 01:08:32.347 } 01:08:32.347 ]' 01:08:32.347 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:32.347 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:32.347 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:32.605 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:08:32.605 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:32.605 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:32.605 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:32.605 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:32.605 11:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:33.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:33.540 11:05:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:33.540 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:33.799 01:08:33.799 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:33.799 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:33.799 11:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:34.057 11:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:34.057 11:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:34.057 11:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:34.057 11:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:34.057 11:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:34.057 11:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:34.057 { 01:08:34.057 "cntlid": 21, 01:08:34.057 "qid": 0, 01:08:34.057 "state": "enabled", 01:08:34.057 "thread": 
"nvmf_tgt_poll_group_000", 01:08:34.057 "listen_address": { 01:08:34.057 "trtype": "TCP", 01:08:34.057 "adrfam": "IPv4", 01:08:34.057 "traddr": "10.0.0.2", 01:08:34.057 "trsvcid": "4420" 01:08:34.057 }, 01:08:34.057 "peer_address": { 01:08:34.057 "trtype": "TCP", 01:08:34.057 "adrfam": "IPv4", 01:08:34.057 "traddr": "10.0.0.1", 01:08:34.057 "trsvcid": "47262" 01:08:34.057 }, 01:08:34.057 "auth": { 01:08:34.057 "state": "completed", 01:08:34.057 "digest": "sha256", 01:08:34.057 "dhgroup": "ffdhe3072" 01:08:34.057 } 01:08:34.057 } 01:08:34.057 ]' 01:08:34.057 11:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:34.057 11:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:34.057 11:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:34.057 11:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:08:34.057 11:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:34.314 11:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:34.314 11:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:34.314 11:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:34.314 11:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:35.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 
01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:35.249 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:08:35.250 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:08:35.508 01:08:35.508 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:35.508 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:35.508 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:35.767 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:35.767 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:35.767 11:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:35.767 11:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:35.767 11:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:35.767 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:35.767 { 01:08:35.767 "cntlid": 23, 01:08:35.767 "qid": 0, 01:08:35.767 "state": "enabled", 01:08:35.767 "thread": "nvmf_tgt_poll_group_000", 01:08:35.767 "listen_address": { 01:08:35.767 "trtype": "TCP", 01:08:35.767 "adrfam": "IPv4", 01:08:35.767 "traddr": "10.0.0.2", 01:08:35.767 "trsvcid": "4420" 01:08:35.767 }, 01:08:35.767 "peer_address": { 01:08:35.767 "trtype": "TCP", 01:08:35.767 "adrfam": "IPv4", 01:08:35.767 "traddr": "10.0.0.1", 01:08:35.767 "trsvcid": "47292" 01:08:35.767 }, 01:08:35.767 "auth": { 01:08:35.767 "state": "completed", 01:08:35.767 "digest": "sha256", 01:08:35.767 "dhgroup": "ffdhe3072" 01:08:35.767 } 01:08:35.767 } 01:08:35.767 ]' 01:08:35.767 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:35.767 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:35.767 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:35.767 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:08:35.767 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:35.767 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:35.767 11:05:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:35.767 11:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:36.026 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:08:36.594 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:36.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:36.594 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:36.594 11:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:36.594 11:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:36.594 11:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:36.594 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:08:36.594 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:36.594 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:08:36.594 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:08:36.853 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 01:08:36.853 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:36.853 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:36.853 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:08:36.853 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:08:36.853 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:36.853 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:36.853 11:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:36.853 11:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:36.853 11:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:36.853 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:36.853 11:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:37.111 01:08:37.111 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:37.111 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:37.111 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:37.370 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:37.370 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:37.370 11:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:37.370 11:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:37.370 11:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:37.370 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:37.370 { 01:08:37.370 "cntlid": 25, 01:08:37.370 "qid": 0, 01:08:37.370 "state": "enabled", 01:08:37.370 "thread": "nvmf_tgt_poll_group_000", 01:08:37.370 "listen_address": { 01:08:37.370 "trtype": "TCP", 01:08:37.370 "adrfam": "IPv4", 01:08:37.370 "traddr": "10.0.0.2", 01:08:37.370 "trsvcid": "4420" 01:08:37.370 }, 01:08:37.370 "peer_address": { 01:08:37.370 "trtype": "TCP", 01:08:37.370 "adrfam": "IPv4", 01:08:37.370 "traddr": "10.0.0.1", 01:08:37.370 "trsvcid": "47312" 01:08:37.370 }, 01:08:37.370 "auth": { 01:08:37.370 "state": "completed", 01:08:37.370 "digest": "sha256", 01:08:37.370 "dhgroup": "ffdhe4096" 01:08:37.370 } 01:08:37.370 } 01:08:37.370 ]' 01:08:37.370 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:37.370 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:37.370 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:37.370 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:08:37.370 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:37.629 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:37.629 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:37.629 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:37.629 11:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:08:38.195 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:38.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:38.195 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:38.195 11:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:38.195 11:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:38.195 11:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:38.195 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:38.195 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:08:38.195 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:08:38.453 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 01:08:38.453 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:38.453 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:38.453 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:08:38.453 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:08:38.453 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:38.453 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:38.453 11:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:38.453 11:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:38.453 11:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:38.453 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:38.453 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:38.710 01:08:38.710 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:38.710 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:38.710 11:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:38.968 11:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:38.968 11:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:38.968 11:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:38.968 11:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:38.968 11:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:38.968 
11:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:38.968 { 01:08:38.968 "cntlid": 27, 01:08:38.968 "qid": 0, 01:08:38.968 "state": "enabled", 01:08:38.968 "thread": "nvmf_tgt_poll_group_000", 01:08:38.968 "listen_address": { 01:08:38.968 "trtype": "TCP", 01:08:38.968 "adrfam": "IPv4", 01:08:38.968 "traddr": "10.0.0.2", 01:08:38.968 "trsvcid": "4420" 01:08:38.968 }, 01:08:38.968 "peer_address": { 01:08:38.968 "trtype": "TCP", 01:08:38.968 "adrfam": "IPv4", 01:08:38.968 "traddr": "10.0.0.1", 01:08:38.968 "trsvcid": "47340" 01:08:38.968 }, 01:08:38.968 "auth": { 01:08:38.968 "state": "completed", 01:08:38.968 "digest": "sha256", 01:08:38.968 "dhgroup": "ffdhe4096" 01:08:38.968 } 01:08:38.968 } 01:08:38.968 ]' 01:08:38.968 11:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:38.968 11:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:38.968 11:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:39.226 11:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:08:39.226 11:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:39.226 11:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:39.226 11:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:39.226 11:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:39.483 11:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:08:40.049 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:40.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:40.049 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:40.049 11:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:40.049 11:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:40.049 11:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:40.049 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:40.049 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:08:40.049 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:08:40.308 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 01:08:40.308 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:40.308 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha256 01:08:40.308 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:08:40.308 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:08:40.308 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:40.308 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:40.308 11:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:40.308 11:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:40.308 11:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:40.308 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:40.308 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:40.568 01:08:40.568 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:40.568 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:40.568 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:40.827 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:40.827 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:40.827 11:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:40.827 11:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:40.827 11:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:40.827 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:40.827 { 01:08:40.827 "cntlid": 29, 01:08:40.827 "qid": 0, 01:08:40.827 "state": "enabled", 01:08:40.827 "thread": "nvmf_tgt_poll_group_000", 01:08:40.827 "listen_address": { 01:08:40.827 "trtype": "TCP", 01:08:40.827 "adrfam": "IPv4", 01:08:40.827 "traddr": "10.0.0.2", 01:08:40.827 "trsvcid": "4420" 01:08:40.827 }, 01:08:40.827 "peer_address": { 01:08:40.827 "trtype": "TCP", 01:08:40.827 "adrfam": "IPv4", 01:08:40.827 "traddr": "10.0.0.1", 01:08:40.827 "trsvcid": "37940" 01:08:40.827 }, 01:08:40.827 "auth": { 01:08:40.827 "state": "completed", 01:08:40.827 "digest": "sha256", 01:08:40.827 "dhgroup": "ffdhe4096" 01:08:40.827 } 01:08:40.827 } 01:08:40.827 ]' 01:08:40.827 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:40.827 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:40.827 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:40.827 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:08:40.827 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:40.827 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:40.827 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:40.827 11:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:41.085 11:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:08:41.653 11:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:41.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:41.653 11:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:41.653 11:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:41.653 11:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:41.653 11:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:41.653 11:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:41.654 11:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:08:41.654 11:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:08:41.912 11:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 01:08:41.912 11:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:41.912 11:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:41.912 11:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:08:41.912 11:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:08:41.912 11:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:41.912 11:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:08:41.912 11:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:41.912 11:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:41.912 11:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:41.912 11:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:08:41.912 11:05:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:08:42.170 01:08:42.170 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:42.170 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:42.170 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:42.427 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:42.428 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:42.428 11:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:42.428 11:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:42.428 11:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:42.428 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:42.428 { 01:08:42.428 "cntlid": 31, 01:08:42.428 "qid": 0, 01:08:42.428 "state": "enabled", 01:08:42.428 "thread": "nvmf_tgt_poll_group_000", 01:08:42.428 "listen_address": { 01:08:42.428 "trtype": "TCP", 01:08:42.428 "adrfam": "IPv4", 01:08:42.428 "traddr": "10.0.0.2", 01:08:42.428 "trsvcid": "4420" 01:08:42.428 }, 01:08:42.428 "peer_address": { 01:08:42.428 "trtype": "TCP", 01:08:42.428 "adrfam": "IPv4", 01:08:42.428 "traddr": "10.0.0.1", 01:08:42.428 "trsvcid": "37966" 01:08:42.428 }, 01:08:42.428 "auth": { 01:08:42.428 "state": "completed", 01:08:42.428 "digest": "sha256", 01:08:42.428 "dhgroup": "ffdhe4096" 01:08:42.428 } 01:08:42.428 } 01:08:42.428 ]' 01:08:42.428 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:42.428 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:42.428 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:42.428 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:08:42.428 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:42.428 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:42.428 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:42.428 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:42.684 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:08:43.250 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:43.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:43.250 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:43.250 11:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:43.250 11:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:43.250 11:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:43.250 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:08:43.250 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:43.250 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:08:43.250 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:08:43.566 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 01:08:43.566 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:43.566 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:43.566 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:08:43.566 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:08:43.566 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:43.566 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:43.566 11:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:43.566 11:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:43.566 11:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:43.566 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:43.566 11:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:43.825 01:08:44.083 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:44.083 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:44.083 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:44.341 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:44.342 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:44.342 11:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:44.342 11:05:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:44.342 11:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:44.342 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:44.342 { 01:08:44.342 "cntlid": 33, 01:08:44.342 "qid": 0, 01:08:44.342 "state": "enabled", 01:08:44.342 "thread": "nvmf_tgt_poll_group_000", 01:08:44.342 "listen_address": { 01:08:44.342 "trtype": "TCP", 01:08:44.342 "adrfam": "IPv4", 01:08:44.342 "traddr": "10.0.0.2", 01:08:44.342 "trsvcid": "4420" 01:08:44.342 }, 01:08:44.342 "peer_address": { 01:08:44.342 "trtype": "TCP", 01:08:44.342 "adrfam": "IPv4", 01:08:44.342 "traddr": "10.0.0.1", 01:08:44.342 "trsvcid": "38000" 01:08:44.342 }, 01:08:44.342 "auth": { 01:08:44.342 "state": "completed", 01:08:44.342 "digest": "sha256", 01:08:44.342 "dhgroup": "ffdhe6144" 01:08:44.342 } 01:08:44.342 } 01:08:44.342 ]' 01:08:44.342 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:44.342 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:44.342 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:44.342 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:08:44.342 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:44.342 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:44.342 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:44.342 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:44.601 11:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:08:45.169 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:45.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:45.169 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:45.169 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:45.169 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:45.169 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:45.169 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:45.169 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:08:45.169 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:08:45.428 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha256 ffdhe6144 1 01:08:45.428 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:45.429 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:45.429 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:08:45.429 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:08:45.429 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:45.429 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:45.429 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:45.429 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:45.429 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:45.429 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:45.429 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:45.996 01:08:45.996 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:45.996 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:45.996 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:45.996 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:45.996 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:45.996 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:45.996 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:45.996 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:45.996 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:45.996 { 01:08:45.996 "cntlid": 35, 01:08:45.996 "qid": 0, 01:08:45.996 "state": "enabled", 01:08:45.996 "thread": "nvmf_tgt_poll_group_000", 01:08:45.996 "listen_address": { 01:08:45.996 "trtype": "TCP", 01:08:45.996 "adrfam": "IPv4", 01:08:45.996 "traddr": "10.0.0.2", 01:08:45.996 "trsvcid": "4420" 01:08:45.996 }, 01:08:45.996 "peer_address": { 01:08:45.996 "trtype": "TCP", 01:08:45.996 "adrfam": "IPv4", 01:08:45.996 "traddr": "10.0.0.1", 01:08:45.996 "trsvcid": "38032" 01:08:45.996 }, 01:08:45.996 "auth": { 01:08:45.996 "state": "completed", 01:08:45.996 "digest": "sha256", 01:08:45.996 "dhgroup": "ffdhe6144" 01:08:45.996 } 01:08:45.996 } 01:08:45.996 ]' 01:08:46.255 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:46.255 11:05:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:46.255 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:46.255 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:08:46.255 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:46.255 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:46.255 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:46.255 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:46.514 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:08:47.082 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:47.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:47.082 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:47.082 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:47.082 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:47.082 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:47.082 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:47.083 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:08:47.083 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:08:47.342 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 01:08:47.342 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:47.342 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:47.342 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:08:47.342 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:08:47.342 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:47.342 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:47.342 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:47.342 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:47.342 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:47.342 11:05:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:47.342 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:47.601 01:08:47.601 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:47.601 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:47.601 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:47.861 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:47.861 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:47.861 11:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:47.861 11:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:47.861 11:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:47.861 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:47.861 { 01:08:47.861 "cntlid": 37, 01:08:47.861 "qid": 0, 01:08:47.861 "state": "enabled", 01:08:47.861 "thread": "nvmf_tgt_poll_group_000", 01:08:47.861 "listen_address": { 01:08:47.861 "trtype": "TCP", 01:08:47.861 "adrfam": "IPv4", 01:08:47.861 "traddr": "10.0.0.2", 01:08:47.861 "trsvcid": "4420" 01:08:47.861 }, 01:08:47.861 "peer_address": { 01:08:47.861 "trtype": "TCP", 01:08:47.861 "adrfam": "IPv4", 01:08:47.861 "traddr": "10.0.0.1", 01:08:47.861 "trsvcid": "38070" 01:08:47.861 }, 01:08:47.861 "auth": { 01:08:47.861 "state": "completed", 01:08:47.861 "digest": "sha256", 01:08:47.861 "dhgroup": "ffdhe6144" 01:08:47.861 } 01:08:47.861 } 01:08:47.861 ]' 01:08:47.861 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:48.119 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:48.119 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:48.119 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:08:48.119 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:48.119 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:48.119 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:48.119 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:48.377 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret 
DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:08:48.943 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:48.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:48.943 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:48.943 11:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:48.943 11:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:48.943 11:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:48.943 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:48.943 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:08:48.943 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:08:49.200 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 01:08:49.200 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:49.200 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:49.200 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:08:49.200 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:08:49.200 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:49.200 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:08:49.200 11:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:49.200 11:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:49.200 11:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:49.200 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:08:49.200 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:08:49.764 01:08:49.765 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:49.765 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:49.765 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:49.765 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:49.765 11:05:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:49.765 11:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:49.765 11:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:49.765 11:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:49.765 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:49.765 { 01:08:49.765 "cntlid": 39, 01:08:49.765 "qid": 0, 01:08:49.765 "state": "enabled", 01:08:49.765 "thread": "nvmf_tgt_poll_group_000", 01:08:49.765 "listen_address": { 01:08:49.765 "trtype": "TCP", 01:08:49.765 "adrfam": "IPv4", 01:08:49.765 "traddr": "10.0.0.2", 01:08:49.765 "trsvcid": "4420" 01:08:49.765 }, 01:08:49.765 "peer_address": { 01:08:49.765 "trtype": "TCP", 01:08:49.765 "adrfam": "IPv4", 01:08:49.765 "traddr": "10.0.0.1", 01:08:49.765 "trsvcid": "56438" 01:08:49.765 }, 01:08:49.765 "auth": { 01:08:49.765 "state": "completed", 01:08:49.765 "digest": "sha256", 01:08:49.765 "dhgroup": "ffdhe6144" 01:08:49.765 } 01:08:49.765 } 01:08:49.765 ]' 01:08:49.765 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:50.022 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:50.022 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:50.022 11:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:08:50.022 11:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:50.022 11:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:50.022 11:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:50.022 11:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:50.281 11:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:08:50.848 11:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:50.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:50.848 11:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:50.848 11:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:50.848 11:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:50.848 11:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:50.848 11:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:08:50.848 11:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:50.848 11:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:08:50.848 11:05:55 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:08:51.105 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 01:08:51.105 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:51.105 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:51.105 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:08:51.105 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:08:51.105 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:51.105 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:51.105 11:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:51.105 11:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:51.105 11:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:51.105 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:51.105 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:51.668 01:08:51.668 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:51.668 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:51.668 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:51.926 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:51.926 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:51.926 11:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:51.926 11:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:51.926 11:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:51.926 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:51.926 { 01:08:51.926 "cntlid": 41, 01:08:51.926 "qid": 0, 01:08:51.926 "state": "enabled", 01:08:51.926 "thread": "nvmf_tgt_poll_group_000", 01:08:51.926 "listen_address": { 01:08:51.926 "trtype": "TCP", 01:08:51.926 "adrfam": "IPv4", 01:08:51.926 "traddr": "10.0.0.2", 01:08:51.926 "trsvcid": "4420" 01:08:51.926 }, 01:08:51.926 "peer_address": { 01:08:51.926 "trtype": "TCP", 01:08:51.926 "adrfam": "IPv4", 01:08:51.926 "traddr": "10.0.0.1", 01:08:51.926 "trsvcid": "56484" 01:08:51.926 }, 01:08:51.926 "auth": { 01:08:51.926 "state": "completed", 01:08:51.926 "digest": "sha256", 
01:08:51.926 "dhgroup": "ffdhe8192" 01:08:51.926 } 01:08:51.926 } 01:08:51.926 ]' 01:08:51.926 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:51.926 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:51.926 11:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:51.926 11:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:08:51.926 11:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:51.926 11:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:51.926 11:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:51.926 11:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:52.224 11:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:08:52.791 11:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:52.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:52.791 11:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:52.791 11:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:52.791 11:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:52.791 11:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:52.791 11:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:52.791 11:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:08:52.791 11:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:08:53.049 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 01:08:53.049 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:53.049 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:53.049 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:08:53.049 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:08:53.049 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:53.049 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:53.049 11:05:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:08:53.049 11:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:53.049 11:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:53.049 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:53.049 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:08:53.614 01:08:53.614 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:53.614 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:53.614 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:53.871 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:53.871 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:53.871 11:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:53.871 11:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:53.871 11:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:53.871 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:53.871 { 01:08:53.871 "cntlid": 43, 01:08:53.871 "qid": 0, 01:08:53.871 "state": "enabled", 01:08:53.871 "thread": "nvmf_tgt_poll_group_000", 01:08:53.871 "listen_address": { 01:08:53.871 "trtype": "TCP", 01:08:53.871 "adrfam": "IPv4", 01:08:53.871 "traddr": "10.0.0.2", 01:08:53.871 "trsvcid": "4420" 01:08:53.871 }, 01:08:53.871 "peer_address": { 01:08:53.871 "trtype": "TCP", 01:08:53.871 "adrfam": "IPv4", 01:08:53.871 "traddr": "10.0.0.1", 01:08:53.871 "trsvcid": "56498" 01:08:53.871 }, 01:08:53.871 "auth": { 01:08:53.871 "state": "completed", 01:08:53.871 "digest": "sha256", 01:08:53.871 "dhgroup": "ffdhe8192" 01:08:53.871 } 01:08:53.871 } 01:08:53.871 ]' 01:08:53.871 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:53.871 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:53.871 11:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:53.871 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:08:53.871 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:53.871 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:53.871 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:53.871 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:54.129 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:08:54.693 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:54.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:54.694 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:54.694 11:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:54.694 11:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:54.694 11:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:54.694 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:54.694 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:08:54.694 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:08:55.259 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 01:08:55.259 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:55.259 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:55.259 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:08:55.259 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:08:55.259 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:55.259 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:55.259 11:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:55.259 11:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:55.259 11:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:55.259 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:55.259 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:08:55.517 01:08:55.775 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:55.775 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:55.775 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:55.775 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:55.775 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:55.775 11:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:55.775 11:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:56.033 11:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:56.033 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:56.033 { 01:08:56.033 "cntlid": 45, 01:08:56.033 "qid": 0, 01:08:56.033 "state": "enabled", 01:08:56.033 "thread": "nvmf_tgt_poll_group_000", 01:08:56.033 "listen_address": { 01:08:56.033 "trtype": "TCP", 01:08:56.033 "adrfam": "IPv4", 01:08:56.033 "traddr": "10.0.0.2", 01:08:56.033 "trsvcid": "4420" 01:08:56.033 }, 01:08:56.033 "peer_address": { 01:08:56.033 "trtype": "TCP", 01:08:56.033 "adrfam": "IPv4", 01:08:56.033 "traddr": "10.0.0.1", 01:08:56.033 "trsvcid": "56520" 01:08:56.033 }, 01:08:56.033 "auth": { 01:08:56.033 "state": "completed", 01:08:56.033 "digest": "sha256", 01:08:56.033 "dhgroup": "ffdhe8192" 01:08:56.033 } 01:08:56.033 } 01:08:56.033 ]' 01:08:56.033 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:56.033 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:56.033 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:56.033 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:08:56.033 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:56.033 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:56.033 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:56.033 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:56.291 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:08:56.856 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:56.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:56.856 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:56.856 11:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:56.856 11:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:56.856 11:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:56.856 11:06:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:56.856 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:08:56.856 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:08:57.115 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 01:08:57.115 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:57.115 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:08:57.115 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:08:57.115 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:08:57.115 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:57.115 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:08:57.115 11:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:57.115 11:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:57.115 11:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:57.115 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:08:57.115 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:08:57.682 01:08:57.682 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:57.682 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:57.682 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:57.939 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:57.939 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:57.939 11:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:57.939 11:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:57.939 11:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:57.939 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:57.939 { 01:08:57.939 "cntlid": 47, 01:08:57.939 "qid": 0, 01:08:57.939 "state": "enabled", 01:08:57.939 "thread": "nvmf_tgt_poll_group_000", 01:08:57.939 "listen_address": { 01:08:57.939 "trtype": "TCP", 01:08:57.939 "adrfam": "IPv4", 01:08:57.939 "traddr": "10.0.0.2", 01:08:57.939 "trsvcid": "4420" 01:08:57.939 }, 01:08:57.939 "peer_address": { 01:08:57.939 "trtype": "TCP", 
01:08:57.940 "adrfam": "IPv4", 01:08:57.940 "traddr": "10.0.0.1", 01:08:57.940 "trsvcid": "56540" 01:08:57.940 }, 01:08:57.940 "auth": { 01:08:57.940 "state": "completed", 01:08:57.940 "digest": "sha256", 01:08:57.940 "dhgroup": "ffdhe8192" 01:08:57.940 } 01:08:57.940 } 01:08:57.940 ]' 01:08:57.940 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:57.940 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:08:57.940 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:57.940 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:08:57.940 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:57.940 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:57.940 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:08:57.940 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:08:58.197 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:08:58.765 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:08:58.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:08:58.765 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:08:58.765 11:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:58.765 11:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:58.765 11:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:58.765 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 01:08:58.765 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:08:58.765 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:08:58.765 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:08:58.765 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:08:59.023 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 01:08:59.023 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:08:59.023 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:08:59.023 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:08:59.023 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:08:59.023 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:08:59.023 
11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:59.023 11:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:59.023 11:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:59.023 11:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:59.023 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:59.023 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:08:59.281 01:08:59.281 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:08:59.281 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:08:59.281 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:08:59.542 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:59.542 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:08:59.542 11:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:59.542 11:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:08:59.542 11:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:59.542 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:08:59.542 { 01:08:59.542 "cntlid": 49, 01:08:59.542 "qid": 0, 01:08:59.542 "state": "enabled", 01:08:59.542 "thread": "nvmf_tgt_poll_group_000", 01:08:59.542 "listen_address": { 01:08:59.542 "trtype": "TCP", 01:08:59.542 "adrfam": "IPv4", 01:08:59.542 "traddr": "10.0.0.2", 01:08:59.542 "trsvcid": "4420" 01:08:59.542 }, 01:08:59.542 "peer_address": { 01:08:59.542 "trtype": "TCP", 01:08:59.542 "adrfam": "IPv4", 01:08:59.542 "traddr": "10.0.0.1", 01:08:59.542 "trsvcid": "49256" 01:08:59.542 }, 01:08:59.542 "auth": { 01:08:59.542 "state": "completed", 01:08:59.542 "digest": "sha384", 01:08:59.542 "dhgroup": "null" 01:08:59.542 } 01:08:59.542 } 01:08:59.542 ]' 01:08:59.542 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:08:59.542 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:08:59.542 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:08:59.801 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:08:59.801 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:08:59.801 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:08:59.801 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 01:08:59.801 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:00.059 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:09:00.626 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:00.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:00.626 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:00.626 11:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:00.626 11:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:00.626 11:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:00.626 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:00.626 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:09:00.626 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:09:00.885 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 01:09:00.885 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:00.885 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:00.885 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:09:00.885 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:09:00.885 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:00.885 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:00.885 11:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:00.885 11:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:00.885 11:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:00.885 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:00.885 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:01.144 01:09:01.144 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:01.144 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:01.144 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:01.403 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:01.403 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:01.403 11:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:01.403 11:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:01.403 11:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:01.403 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:01.403 { 01:09:01.403 "cntlid": 51, 01:09:01.403 "qid": 0, 01:09:01.403 "state": "enabled", 01:09:01.403 "thread": "nvmf_tgt_poll_group_000", 01:09:01.403 "listen_address": { 01:09:01.403 "trtype": "TCP", 01:09:01.403 "adrfam": "IPv4", 01:09:01.403 "traddr": "10.0.0.2", 01:09:01.403 "trsvcid": "4420" 01:09:01.403 }, 01:09:01.403 "peer_address": { 01:09:01.403 "trtype": "TCP", 01:09:01.403 "adrfam": "IPv4", 01:09:01.403 "traddr": "10.0.0.1", 01:09:01.403 "trsvcid": "49282" 01:09:01.403 }, 01:09:01.403 "auth": { 01:09:01.403 "state": "completed", 01:09:01.403 "digest": "sha384", 01:09:01.403 "dhgroup": "null" 01:09:01.403 } 01:09:01.403 } 01:09:01.403 ]' 01:09:01.403 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:01.403 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:01.403 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:01.403 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:09:01.403 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:01.403 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:01.403 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:01.403 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:01.662 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:09:02.230 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:02.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:02.231 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:02.231 11:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:09:02.231 11:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:02.231 11:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:02.231 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:02.231 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:09:02.231 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:09:02.489 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 01:09:02.489 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:02.489 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:02.489 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:09:02.489 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:09:02.489 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:02.489 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:02.489 11:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:02.489 11:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:02.489 11:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:02.489 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:02.489 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:02.748 01:09:02.748 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:02.748 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:02.748 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:03.007 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:03.007 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:03.007 11:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:03.007 11:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:03.007 11:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:03.007 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:03.007 { 01:09:03.007 "cntlid": 53, 01:09:03.007 "qid": 0, 01:09:03.007 "state": "enabled", 
01:09:03.007 "thread": "nvmf_tgt_poll_group_000", 01:09:03.007 "listen_address": { 01:09:03.007 "trtype": "TCP", 01:09:03.007 "adrfam": "IPv4", 01:09:03.007 "traddr": "10.0.0.2", 01:09:03.007 "trsvcid": "4420" 01:09:03.007 }, 01:09:03.007 "peer_address": { 01:09:03.007 "trtype": "TCP", 01:09:03.007 "adrfam": "IPv4", 01:09:03.007 "traddr": "10.0.0.1", 01:09:03.007 "trsvcid": "49308" 01:09:03.007 }, 01:09:03.007 "auth": { 01:09:03.007 "state": "completed", 01:09:03.007 "digest": "sha384", 01:09:03.007 "dhgroup": "null" 01:09:03.007 } 01:09:03.007 } 01:09:03.007 ]' 01:09:03.007 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:03.007 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:03.007 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:03.265 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:09:03.265 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:03.265 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:03.265 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:03.265 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:03.522 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:09:04.088 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:04.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:04.088 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:04.088 11:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:04.088 11:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:04.088 11:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:04.088 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:04.088 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:09:04.088 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:09:04.345 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 01:09:04.345 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:04.345 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:04.345 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:09:04.345 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:09:04.345 
11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:04.345 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:09:04.345 11:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:04.345 11:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:04.345 11:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:04.345 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:04.345 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:04.603 01:09:04.603 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:04.603 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:04.603 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:04.603 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:04.603 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:04.603 11:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:04.603 11:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:04.603 11:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:04.603 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:04.603 { 01:09:04.603 "cntlid": 55, 01:09:04.603 "qid": 0, 01:09:04.603 "state": "enabled", 01:09:04.603 "thread": "nvmf_tgt_poll_group_000", 01:09:04.603 "listen_address": { 01:09:04.603 "trtype": "TCP", 01:09:04.603 "adrfam": "IPv4", 01:09:04.603 "traddr": "10.0.0.2", 01:09:04.603 "trsvcid": "4420" 01:09:04.603 }, 01:09:04.603 "peer_address": { 01:09:04.603 "trtype": "TCP", 01:09:04.603 "adrfam": "IPv4", 01:09:04.603 "traddr": "10.0.0.1", 01:09:04.603 "trsvcid": "49338" 01:09:04.603 }, 01:09:04.603 "auth": { 01:09:04.603 "state": "completed", 01:09:04.603 "digest": "sha384", 01:09:04.603 "dhgroup": "null" 01:09:04.603 } 01:09:04.603 } 01:09:04.603 ]' 01:09:04.603 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:04.860 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:04.860 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:04.860 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:09:04.860 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:04.860 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:04.860 11:06:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:04.860 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:05.119 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:09:05.684 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:05.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:05.684 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:05.684 11:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:05.684 11:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:05.684 11:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:05.684 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:09:05.684 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:05.684 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:09:05.684 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:09:05.943 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 01:09:05.943 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:05.943 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:05.943 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:09:05.943 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:09:05.943 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:05.943 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:05.943 11:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:05.943 11:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:05.943 11:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:05.943 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:05.943 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:06.201 01:09:06.201 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:06.201 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:06.201 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:06.459 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:06.459 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:06.459 11:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:06.459 11:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:06.459 11:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:06.459 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:06.459 { 01:09:06.459 "cntlid": 57, 01:09:06.459 "qid": 0, 01:09:06.459 "state": "enabled", 01:09:06.459 "thread": "nvmf_tgt_poll_group_000", 01:09:06.459 "listen_address": { 01:09:06.459 "trtype": "TCP", 01:09:06.459 "adrfam": "IPv4", 01:09:06.459 "traddr": "10.0.0.2", 01:09:06.459 "trsvcid": "4420" 01:09:06.459 }, 01:09:06.459 "peer_address": { 01:09:06.459 "trtype": "TCP", 01:09:06.459 "adrfam": "IPv4", 01:09:06.459 "traddr": "10.0.0.1", 01:09:06.459 "trsvcid": "49352" 01:09:06.459 }, 01:09:06.459 "auth": { 01:09:06.459 "state": "completed", 01:09:06.459 "digest": "sha384", 01:09:06.459 "dhgroup": "ffdhe2048" 01:09:06.459 } 01:09:06.459 } 01:09:06.459 ]' 01:09:06.459 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:06.459 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:06.459 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:06.459 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:09:06.459 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:06.459 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:06.459 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:06.459 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:06.717 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:09:07.282 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:07.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:07.282 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:07.282 11:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:07.282 11:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:07.282 11:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:07.282 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:07.282 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:09:07.282 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:09:07.540 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 01:09:07.540 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:07.540 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:07.540 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:09:07.540 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:09:07.540 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:07.540 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:07.540 11:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:07.540 11:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:07.540 11:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:07.540 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:07.540 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:07.798 01:09:07.798 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:07.798 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:07.798 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:08.054 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:08.054 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:08.054 11:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:08.054 11:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:08.054 11:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:08.054 
11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:08.054 { 01:09:08.054 "cntlid": 59, 01:09:08.054 "qid": 0, 01:09:08.054 "state": "enabled", 01:09:08.054 "thread": "nvmf_tgt_poll_group_000", 01:09:08.054 "listen_address": { 01:09:08.054 "trtype": "TCP", 01:09:08.054 "adrfam": "IPv4", 01:09:08.054 "traddr": "10.0.0.2", 01:09:08.054 "trsvcid": "4420" 01:09:08.054 }, 01:09:08.054 "peer_address": { 01:09:08.054 "trtype": "TCP", 01:09:08.054 "adrfam": "IPv4", 01:09:08.054 "traddr": "10.0.0.1", 01:09:08.054 "trsvcid": "49392" 01:09:08.054 }, 01:09:08.054 "auth": { 01:09:08.054 "state": "completed", 01:09:08.054 "digest": "sha384", 01:09:08.054 "dhgroup": "ffdhe2048" 01:09:08.054 } 01:09:08.054 } 01:09:08.054 ]' 01:09:08.054 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:08.054 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:08.054 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:08.311 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:09:08.311 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:08.311 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:08.311 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:08.311 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:08.569 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:09.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:09.134 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:09.393 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:09.664 01:09:09.664 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:09.664 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:09.664 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:09.664 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:09.664 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:09.664 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:09.664 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:09.664 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:09.664 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:09.664 { 01:09:09.664 "cntlid": 61, 01:09:09.664 "qid": 0, 01:09:09.664 "state": "enabled", 01:09:09.664 "thread": "nvmf_tgt_poll_group_000", 01:09:09.664 "listen_address": { 01:09:09.664 "trtype": "TCP", 01:09:09.664 "adrfam": "IPv4", 01:09:09.664 "traddr": "10.0.0.2", 01:09:09.664 "trsvcid": "4420" 01:09:09.664 }, 01:09:09.664 "peer_address": { 01:09:09.664 "trtype": "TCP", 01:09:09.664 "adrfam": "IPv4", 01:09:09.664 "traddr": "10.0.0.1", 01:09:09.665 "trsvcid": "54806" 01:09:09.665 }, 01:09:09.665 "auth": { 01:09:09.665 "state": "completed", 01:09:09.665 "digest": "sha384", 01:09:09.665 "dhgroup": "ffdhe2048" 01:09:09.665 } 01:09:09.665 } 01:09:09.665 ]' 01:09:09.948 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:09.948 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:09.948 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:09.948 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:09:09.948 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:09.948 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:09.948 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:09.948 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:10.205 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:09:10.772 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:10.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:10.772 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:10.772 11:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:10.772 11:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:10.772 11:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:10.772 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:10.772 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:09:10.772 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:09:11.029 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 01:09:11.029 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:11.029 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:11.029 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:09:11.029 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:09:11.029 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:11.030 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:09:11.030 11:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:11.030 11:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:11.030 11:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:11.030 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:11.030 11:06:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:11.288 01:09:11.288 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:11.288 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:11.288 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:11.288 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:11.288 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:11.288 11:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:11.288 11:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:11.546 11:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:11.547 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:11.547 { 01:09:11.547 "cntlid": 63, 01:09:11.547 "qid": 0, 01:09:11.547 "state": "enabled", 01:09:11.547 "thread": "nvmf_tgt_poll_group_000", 01:09:11.547 "listen_address": { 01:09:11.547 "trtype": "TCP", 01:09:11.547 "adrfam": "IPv4", 01:09:11.547 "traddr": "10.0.0.2", 01:09:11.547 "trsvcid": "4420" 01:09:11.547 }, 01:09:11.547 "peer_address": { 01:09:11.547 "trtype": "TCP", 01:09:11.547 "adrfam": "IPv4", 01:09:11.547 "traddr": "10.0.0.1", 01:09:11.547 "trsvcid": "54830" 01:09:11.547 }, 01:09:11.547 "auth": { 01:09:11.547 "state": "completed", 01:09:11.547 "digest": "sha384", 01:09:11.547 "dhgroup": "ffdhe2048" 01:09:11.547 } 01:09:11.547 } 01:09:11.547 ]' 01:09:11.547 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:11.547 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:11.547 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:11.547 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:09:11.547 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:11.547 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:11.547 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:11.547 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:11.805 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:09:12.371 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:12.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:12.371 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:12.371 11:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:12.371 11:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:12.371 11:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:12.371 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:09:12.371 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:12.371 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:09:12.371 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:09:12.629 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 01:09:12.629 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:12.629 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:12.629 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:09:12.629 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:09:12.629 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:12.629 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:12.629 11:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:12.629 11:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:12.629 11:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:12.629 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:12.629 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:12.886 01:09:12.886 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:12.886 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:12.886 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:13.144 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:13.144 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:13.144 11:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:13.144 11:06:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:13.144 11:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:13.144 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:13.144 { 01:09:13.144 "cntlid": 65, 01:09:13.144 "qid": 0, 01:09:13.144 "state": "enabled", 01:09:13.144 "thread": "nvmf_tgt_poll_group_000", 01:09:13.144 "listen_address": { 01:09:13.144 "trtype": "TCP", 01:09:13.144 "adrfam": "IPv4", 01:09:13.144 "traddr": "10.0.0.2", 01:09:13.144 "trsvcid": "4420" 01:09:13.144 }, 01:09:13.144 "peer_address": { 01:09:13.144 "trtype": "TCP", 01:09:13.144 "adrfam": "IPv4", 01:09:13.144 "traddr": "10.0.0.1", 01:09:13.144 "trsvcid": "54850" 01:09:13.144 }, 01:09:13.144 "auth": { 01:09:13.144 "state": "completed", 01:09:13.144 "digest": "sha384", 01:09:13.144 "dhgroup": "ffdhe3072" 01:09:13.144 } 01:09:13.144 } 01:09:13.144 ]' 01:09:13.144 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:13.144 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:13.144 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:13.144 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:09:13.144 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:13.144 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:13.144 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:13.144 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:13.401 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:09:13.965 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:13.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:13.965 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:13.965 11:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:13.965 11:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:13.965 11:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:13.965 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:13.965 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:09:13.965 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:09:14.255 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 ffdhe3072 1 01:09:14.255 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:14.255 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:14.255 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:09:14.255 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:09:14.255 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:14.255 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:14.255 11:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:14.255 11:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:14.255 11:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:14.255 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:14.255 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:14.512 01:09:14.512 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:14.512 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:14.512 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:14.769 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:14.769 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:14.769 11:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:14.769 11:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:14.769 11:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:14.769 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:14.769 { 01:09:14.769 "cntlid": 67, 01:09:14.769 "qid": 0, 01:09:14.769 "state": "enabled", 01:09:14.769 "thread": "nvmf_tgt_poll_group_000", 01:09:14.769 "listen_address": { 01:09:14.769 "trtype": "TCP", 01:09:14.769 "adrfam": "IPv4", 01:09:14.769 "traddr": "10.0.0.2", 01:09:14.769 "trsvcid": "4420" 01:09:14.769 }, 01:09:14.769 "peer_address": { 01:09:14.769 "trtype": "TCP", 01:09:14.769 "adrfam": "IPv4", 01:09:14.769 "traddr": "10.0.0.1", 01:09:14.769 "trsvcid": "54874" 01:09:14.769 }, 01:09:14.769 "auth": { 01:09:14.769 "state": "completed", 01:09:14.769 "digest": "sha384", 01:09:14.769 "dhgroup": "ffdhe3072" 01:09:14.769 } 01:09:14.769 } 01:09:14.769 ]' 01:09:14.769 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:14.769 11:06:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:14.769 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:14.769 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:09:14.769 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:14.769 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:14.769 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:14.769 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:15.026 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:09:15.590 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:15.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:15.590 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:15.590 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:15.590 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:15.590 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:15.590 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:15.590 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:09:15.590 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:09:15.849 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 01:09:15.849 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:15.849 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:15.849 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:09:15.849 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:09:15.849 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:15.849 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:15.849 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:15.849 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:15.849 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:15.849 11:06:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:15.849 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:16.107 01:09:16.107 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:16.107 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:16.107 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:16.364 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:16.364 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:16.364 11:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:16.364 11:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:16.364 11:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:16.364 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:16.364 { 01:09:16.364 "cntlid": 69, 01:09:16.364 "qid": 0, 01:09:16.364 "state": "enabled", 01:09:16.364 "thread": "nvmf_tgt_poll_group_000", 01:09:16.364 "listen_address": { 01:09:16.364 "trtype": "TCP", 01:09:16.364 "adrfam": "IPv4", 01:09:16.364 "traddr": "10.0.0.2", 01:09:16.364 "trsvcid": "4420" 01:09:16.364 }, 01:09:16.364 "peer_address": { 01:09:16.364 "trtype": "TCP", 01:09:16.364 "adrfam": "IPv4", 01:09:16.364 "traddr": "10.0.0.1", 01:09:16.364 "trsvcid": "54882" 01:09:16.364 }, 01:09:16.364 "auth": { 01:09:16.364 "state": "completed", 01:09:16.364 "digest": "sha384", 01:09:16.364 "dhgroup": "ffdhe3072" 01:09:16.364 } 01:09:16.364 } 01:09:16.364 ]' 01:09:16.364 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:16.364 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:16.364 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:16.621 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:09:16.621 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:16.621 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:16.621 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:16.621 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:16.879 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret 
DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:09:17.448 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:17.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:17.448 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:17.448 11:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:17.448 11:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:17.448 11:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:17.448 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:17.448 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:09:17.448 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:09:17.706 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 01:09:17.706 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:17.706 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:17.706 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:09:17.706 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:09:17.706 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:17.706 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:09:17.706 11:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:17.706 11:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:17.706 11:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:17.706 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:17.706 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:17.963 01:09:17.963 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:17.964 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:17.964 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:18.221 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:18.221 11:06:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:18.221 11:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:18.221 11:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:18.221 11:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:18.221 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:18.221 { 01:09:18.221 "cntlid": 71, 01:09:18.221 "qid": 0, 01:09:18.221 "state": "enabled", 01:09:18.221 "thread": "nvmf_tgt_poll_group_000", 01:09:18.221 "listen_address": { 01:09:18.221 "trtype": "TCP", 01:09:18.221 "adrfam": "IPv4", 01:09:18.221 "traddr": "10.0.0.2", 01:09:18.221 "trsvcid": "4420" 01:09:18.221 }, 01:09:18.221 "peer_address": { 01:09:18.221 "trtype": "TCP", 01:09:18.221 "adrfam": "IPv4", 01:09:18.221 "traddr": "10.0.0.1", 01:09:18.221 "trsvcid": "54898" 01:09:18.221 }, 01:09:18.221 "auth": { 01:09:18.221 "state": "completed", 01:09:18.221 "digest": "sha384", 01:09:18.221 "dhgroup": "ffdhe3072" 01:09:18.221 } 01:09:18.221 } 01:09:18.221 ]' 01:09:18.221 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:18.221 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:18.221 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:18.221 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:09:18.221 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:18.221 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:18.221 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:18.221 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:18.478 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:09:19.045 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:19.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:19.045 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:19.045 11:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:19.045 11:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:19.045 11:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:19.045 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:09:19.045 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:19.045 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:09:19.045 11:06:24 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:09:19.304 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 01:09:19.304 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:19.304 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:19.304 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:09:19.304 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:09:19.304 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:19.304 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:19.304 11:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:19.304 11:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:19.304 11:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:19.304 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:19.304 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:19.563 01:09:19.563 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:19.563 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:19.563 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:19.820 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:19.820 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:19.820 11:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:19.820 11:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:19.820 11:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:19.820 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:19.820 { 01:09:19.820 "cntlid": 73, 01:09:19.820 "qid": 0, 01:09:19.820 "state": "enabled", 01:09:19.820 "thread": "nvmf_tgt_poll_group_000", 01:09:19.820 "listen_address": { 01:09:19.820 "trtype": "TCP", 01:09:19.820 "adrfam": "IPv4", 01:09:19.820 "traddr": "10.0.0.2", 01:09:19.820 "trsvcid": "4420" 01:09:19.820 }, 01:09:19.820 "peer_address": { 01:09:19.820 "trtype": "TCP", 01:09:19.820 "adrfam": "IPv4", 01:09:19.820 "traddr": "10.0.0.1", 01:09:19.820 "trsvcid": "34362" 01:09:19.820 }, 01:09:19.820 "auth": { 01:09:19.820 "state": "completed", 01:09:19.820 "digest": "sha384", 
01:09:19.820 "dhgroup": "ffdhe4096" 01:09:19.820 } 01:09:19.820 } 01:09:19.820 ]' 01:09:19.820 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:19.820 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:19.821 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:19.821 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:09:19.821 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:19.821 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:19.821 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:19.821 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:20.079 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:09:20.650 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:20.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:20.650 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:20.650 11:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:20.650 11:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:20.650 11:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:20.650 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:20.650 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:09:20.650 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:09:20.909 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 01:09:20.909 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:20.909 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:20.909 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:09:20.909 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:09:20.909 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:20.909 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:20.909 11:06:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:09:20.909 11:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:20.909 11:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:20.909 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:20.909 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:21.168 01:09:21.427 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:21.427 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:21.427 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:21.427 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:21.427 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:21.427 11:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:21.427 11:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:21.427 11:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:21.427 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:21.427 { 01:09:21.427 "cntlid": 75, 01:09:21.427 "qid": 0, 01:09:21.427 "state": "enabled", 01:09:21.427 "thread": "nvmf_tgt_poll_group_000", 01:09:21.427 "listen_address": { 01:09:21.427 "trtype": "TCP", 01:09:21.427 "adrfam": "IPv4", 01:09:21.427 "traddr": "10.0.0.2", 01:09:21.427 "trsvcid": "4420" 01:09:21.427 }, 01:09:21.427 "peer_address": { 01:09:21.427 "trtype": "TCP", 01:09:21.427 "adrfam": "IPv4", 01:09:21.427 "traddr": "10.0.0.1", 01:09:21.427 "trsvcid": "34380" 01:09:21.427 }, 01:09:21.427 "auth": { 01:09:21.427 "state": "completed", 01:09:21.427 "digest": "sha384", 01:09:21.427 "dhgroup": "ffdhe4096" 01:09:21.427 } 01:09:21.427 } 01:09:21.427 ]' 01:09:21.427 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:21.686 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:21.686 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:21.686 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:09:21.686 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:21.686 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:21.686 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:21.686 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:21.945 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:09:22.512 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:22.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:22.512 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:22.512 11:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:22.512 11:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:22.512 11:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:22.512 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:22.512 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:09:22.512 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:09:22.772 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 01:09:22.772 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:22.772 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:22.772 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:09:22.772 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:09:22.772 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:22.772 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:22.772 11:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:22.772 11:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:22.772 11:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:22.772 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:22.772 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:23.030 01:09:23.030 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:23.030 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:23.030 
11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:23.289 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:23.289 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:23.289 11:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:23.289 11:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:23.289 11:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:23.289 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:23.289 { 01:09:23.289 "cntlid": 77, 01:09:23.289 "qid": 0, 01:09:23.289 "state": "enabled", 01:09:23.289 "thread": "nvmf_tgt_poll_group_000", 01:09:23.289 "listen_address": { 01:09:23.289 "trtype": "TCP", 01:09:23.289 "adrfam": "IPv4", 01:09:23.289 "traddr": "10.0.0.2", 01:09:23.289 "trsvcid": "4420" 01:09:23.289 }, 01:09:23.289 "peer_address": { 01:09:23.289 "trtype": "TCP", 01:09:23.289 "adrfam": "IPv4", 01:09:23.289 "traddr": "10.0.0.1", 01:09:23.289 "trsvcid": "34394" 01:09:23.289 }, 01:09:23.289 "auth": { 01:09:23.289 "state": "completed", 01:09:23.289 "digest": "sha384", 01:09:23.289 "dhgroup": "ffdhe4096" 01:09:23.289 } 01:09:23.289 } 01:09:23.289 ]' 01:09:23.289 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:23.289 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:23.289 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:23.289 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:09:23.289 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:23.289 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:23.289 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:23.289 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:23.549 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:09:24.115 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:24.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:24.115 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:24.115 11:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:24.115 11:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:24.115 11:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:24.115 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # 
for keyid in "${!keys[@]}" 01:09:24.115 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:09:24.115 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:09:24.372 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 01:09:24.372 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:24.373 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:24.373 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:09:24.373 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:09:24.373 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:24.373 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:09:24.373 11:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:24.373 11:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:24.373 11:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:24.373 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:24.373 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:24.632 01:09:24.632 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:24.632 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:24.632 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:24.904 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:24.904 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:24.904 11:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:24.904 11:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:24.904 11:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:24.904 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:24.904 { 01:09:24.904 "cntlid": 79, 01:09:24.904 "qid": 0, 01:09:24.904 "state": "enabled", 01:09:24.904 "thread": "nvmf_tgt_poll_group_000", 01:09:24.904 "listen_address": { 01:09:24.904 "trtype": "TCP", 01:09:24.904 "adrfam": "IPv4", 01:09:24.904 "traddr": "10.0.0.2", 01:09:24.904 "trsvcid": "4420" 01:09:24.904 }, 01:09:24.904 "peer_address": { 01:09:24.904 "trtype": "TCP", 01:09:24.904 "adrfam": "IPv4", 01:09:24.904 "traddr": 
"10.0.0.1", 01:09:24.904 "trsvcid": "34434" 01:09:24.904 }, 01:09:24.904 "auth": { 01:09:24.904 "state": "completed", 01:09:24.904 "digest": "sha384", 01:09:24.904 "dhgroup": "ffdhe4096" 01:09:24.904 } 01:09:24.904 } 01:09:24.904 ]' 01:09:24.904 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:24.904 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:24.904 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:24.904 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:09:24.904 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:24.904 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:24.904 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:24.904 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:25.162 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:09:25.728 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:25.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:25.728 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:25.728 11:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:25.728 11:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:25.728 11:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:25.728 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:09:25.728 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:25.728 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:09:25.728 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:09:25.985 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 01:09:25.985 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:25.985 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:25.985 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:09:25.985 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:09:25.985 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:25.985 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:25.985 11:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:25.985 11:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:25.985 11:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:25.985 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:25.985 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:26.243 01:09:26.502 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:26.502 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:26.502 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:26.502 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:26.502 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:26.502 11:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:26.502 11:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:26.761 11:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:26.761 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:26.761 { 01:09:26.761 "cntlid": 81, 01:09:26.761 "qid": 0, 01:09:26.761 "state": "enabled", 01:09:26.761 "thread": "nvmf_tgt_poll_group_000", 01:09:26.761 "listen_address": { 01:09:26.761 "trtype": "TCP", 01:09:26.761 "adrfam": "IPv4", 01:09:26.761 "traddr": "10.0.0.2", 01:09:26.761 "trsvcid": "4420" 01:09:26.761 }, 01:09:26.761 "peer_address": { 01:09:26.761 "trtype": "TCP", 01:09:26.761 "adrfam": "IPv4", 01:09:26.761 "traddr": "10.0.0.1", 01:09:26.761 "trsvcid": "34462" 01:09:26.761 }, 01:09:26.761 "auth": { 01:09:26.761 "state": "completed", 01:09:26.761 "digest": "sha384", 01:09:26.761 "dhgroup": "ffdhe6144" 01:09:26.761 } 01:09:26.761 } 01:09:26.761 ]' 01:09:26.761 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:26.761 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:26.761 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:26.761 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:09:26.761 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:26.761 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:26.761 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:26.761 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:27.086 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:27.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:27.684 11:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:27.946 11:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:27.946 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:27.946 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
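The iterations in this part of the trace all follow the same shape: the host-side bdev_nvme layer is restricted to one DH-HMAC-CHAP digest/dhgroup pair, the target subsystem is told which keys the host NQN may authenticate with, and a controller is attached over TCP so the authentication actually runs. A minimal sketch of that sequence, using the same rpc.py commands, addresses and NQNs that appear in the log (the $rpc/$subnqn/$hostnqn variable names are introduced here only for readability; the target-side socket is assumed to be rpc.py's default, and key1/ckey1 are key names registered by an earlier part of the test that is outside this excerpt):

    # Paths, NQNs and key names copied from the trace; the variable names are ours.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb

    # Host side (SPDK initiator app on /var/tmp/host.sock): accept only this digest/dhgroup pair.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # Target side: let the host NQN authenticate with key1; ckey1 enables bidirectional auth.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attaching a controller triggers DH-HMAC-CHAP on the new queue pair.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1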
01:09:28.206 01:09:28.206 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:28.206 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:28.206 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:28.464 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:28.464 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:28.464 11:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:28.464 11:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:28.464 11:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:28.464 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:28.464 { 01:09:28.464 "cntlid": 83, 01:09:28.464 "qid": 0, 01:09:28.464 "state": "enabled", 01:09:28.464 "thread": "nvmf_tgt_poll_group_000", 01:09:28.464 "listen_address": { 01:09:28.464 "trtype": "TCP", 01:09:28.464 "adrfam": "IPv4", 01:09:28.464 "traddr": "10.0.0.2", 01:09:28.464 "trsvcid": "4420" 01:09:28.464 }, 01:09:28.464 "peer_address": { 01:09:28.464 "trtype": "TCP", 01:09:28.464 "adrfam": "IPv4", 01:09:28.464 "traddr": "10.0.0.1", 01:09:28.464 "trsvcid": "34486" 01:09:28.464 }, 01:09:28.464 "auth": { 01:09:28.464 "state": "completed", 01:09:28.464 "digest": "sha384", 01:09:28.464 "dhgroup": "ffdhe6144" 01:09:28.464 } 01:09:28.464 } 01:09:28.464 ]' 01:09:28.464 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:28.464 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:28.464 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:28.464 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:09:28.464 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:28.464 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:28.464 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:28.464 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:28.721 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:09:29.287 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:29.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:29.287 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:29.287 11:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:29.287 11:06:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:29.287 11:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:29.287 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:29.287 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:09:29.287 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:09:29.548 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 01:09:29.548 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:29.548 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:29.548 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:09:29.548 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:09:29.548 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:29.548 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:29.548 11:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:29.548 11:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:29.548 11:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:29.548 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:29.548 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:30.116 01:09:30.116 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:30.116 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:30.116 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:30.116 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:30.116 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:30.116 11:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:30.116 11:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:30.116 11:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:30.116 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:30.116 { 01:09:30.116 "cntlid": 85, 01:09:30.116 "qid": 0, 01:09:30.116 "state": "enabled", 01:09:30.116 "thread": 
"nvmf_tgt_poll_group_000", 01:09:30.116 "listen_address": { 01:09:30.116 "trtype": "TCP", 01:09:30.116 "adrfam": "IPv4", 01:09:30.116 "traddr": "10.0.0.2", 01:09:30.116 "trsvcid": "4420" 01:09:30.116 }, 01:09:30.116 "peer_address": { 01:09:30.116 "trtype": "TCP", 01:09:30.116 "adrfam": "IPv4", 01:09:30.116 "traddr": "10.0.0.1", 01:09:30.116 "trsvcid": "59808" 01:09:30.116 }, 01:09:30.116 "auth": { 01:09:30.116 "state": "completed", 01:09:30.116 "digest": "sha384", 01:09:30.116 "dhgroup": "ffdhe6144" 01:09:30.116 } 01:09:30.116 } 01:09:30.116 ]' 01:09:30.116 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:30.116 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:30.376 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:30.376 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:09:30.376 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:30.376 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:30.376 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:30.376 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:30.635 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:31.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 
01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:31.203 11:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:31.461 11:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:31.461 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:31.461 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:31.720 01:09:31.720 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:31.720 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:31.720 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:31.978 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:31.978 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:31.978 11:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:31.978 11:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:31.978 11:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:31.978 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:31.978 { 01:09:31.978 "cntlid": 87, 01:09:31.978 "qid": 0, 01:09:31.978 "state": "enabled", 01:09:31.978 "thread": "nvmf_tgt_poll_group_000", 01:09:31.978 "listen_address": { 01:09:31.978 "trtype": "TCP", 01:09:31.978 "adrfam": "IPv4", 01:09:31.978 "traddr": "10.0.0.2", 01:09:31.978 "trsvcid": "4420" 01:09:31.978 }, 01:09:31.978 "peer_address": { 01:09:31.978 "trtype": "TCP", 01:09:31.978 "adrfam": "IPv4", 01:09:31.978 "traddr": "10.0.0.1", 01:09:31.978 "trsvcid": "59832" 01:09:31.978 }, 01:09:31.978 "auth": { 01:09:31.978 "state": "completed", 01:09:31.978 "digest": "sha384", 01:09:31.978 "dhgroup": "ffdhe6144" 01:09:31.978 } 01:09:31.978 } 01:09:31.978 ]' 01:09:31.978 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:31.978 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:31.978 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:31.978 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:09:31.978 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:31.978 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:31.978 11:06:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:31.978 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:32.236 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:09:32.803 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:32.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:32.803 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:32.803 11:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:32.803 11:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:32.803 11:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:32.803 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:09:32.803 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:32.803 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:09:32.803 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:09:33.063 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 01:09:33.063 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:33.063 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:33.063 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:09:33.063 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:09:33.063 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:33.063 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:33.063 11:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:33.063 11:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:33.063 11:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:33.063 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:33.063 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:33.630 01:09:33.630 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:33.630 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:33.630 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:33.917 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:33.917 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:33.917 11:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:33.917 11:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:33.917 11:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:33.917 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:33.917 { 01:09:33.917 "cntlid": 89, 01:09:33.917 "qid": 0, 01:09:33.917 "state": "enabled", 01:09:33.917 "thread": "nvmf_tgt_poll_group_000", 01:09:33.917 "listen_address": { 01:09:33.917 "trtype": "TCP", 01:09:33.917 "adrfam": "IPv4", 01:09:33.917 "traddr": "10.0.0.2", 01:09:33.917 "trsvcid": "4420" 01:09:33.917 }, 01:09:33.917 "peer_address": { 01:09:33.917 "trtype": "TCP", 01:09:33.917 "adrfam": "IPv4", 01:09:33.917 "traddr": "10.0.0.1", 01:09:33.917 "trsvcid": "59850" 01:09:33.917 }, 01:09:33.917 "auth": { 01:09:33.917 "state": "completed", 01:09:33.917 "digest": "sha384", 01:09:33.917 "dhgroup": "ffdhe8192" 01:09:33.917 } 01:09:33.917 } 01:09:33.917 ]' 01:09:33.917 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:33.917 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:33.917 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:33.917 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:09:33.917 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:33.917 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:33.917 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:33.917 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:34.212 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:09:34.779 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:34.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:34.779 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:34.779 11:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:34.779 11:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:34.779 11:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:34.779 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:34.779 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:09:34.779 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:09:35.036 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 01:09:35.036 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:35.037 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:35.037 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:09:35.037 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:09:35.037 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:35.037 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:35.037 11:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:35.037 11:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:35.037 11:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:35.037 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:35.037 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:35.604 01:09:35.604 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:35.604 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:35.604 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:35.604 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:35.604 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:35.604 11:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:35.604 11:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:35.863 11:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:35.863 
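After the SPDK-host attach/detach cycle, each iteration also exercises the kernel initiator: nvme-cli connects to the same subsystem with the DHHC-1 secrets printed in the trace, disconnects again, and the host NQN is removed from the subsystem before the next combination is configured. A sketch of that half of the cycle, again reusing $rpc/$subnqn/$hostnqn; here $key1 and $ckey1 are placeholders for DHHC-1 secret strings like the ones shown in the log (host secret and controller secret respectively):

    # Kernel initiator: in-band DH-HMAC-CHAP via nvme-cli, mirroring the trace's command line.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb \
        --dhchap-secret "$key1" --dhchap-ctrl-secret "$ckey1"

    # On success the disconnect reports "disconnected 1 controller(s)", as seen in the trace.
    nvme disconnect -n "$subnqn"

    # Drop the host from the subsystem so the next iteration can add it with different keys.
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"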
11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:35.863 { 01:09:35.863 "cntlid": 91, 01:09:35.863 "qid": 0, 01:09:35.863 "state": "enabled", 01:09:35.863 "thread": "nvmf_tgt_poll_group_000", 01:09:35.863 "listen_address": { 01:09:35.863 "trtype": "TCP", 01:09:35.863 "adrfam": "IPv4", 01:09:35.863 "traddr": "10.0.0.2", 01:09:35.863 "trsvcid": "4420" 01:09:35.863 }, 01:09:35.863 "peer_address": { 01:09:35.863 "trtype": "TCP", 01:09:35.863 "adrfam": "IPv4", 01:09:35.863 "traddr": "10.0.0.1", 01:09:35.863 "trsvcid": "59878" 01:09:35.863 }, 01:09:35.863 "auth": { 01:09:35.863 "state": "completed", 01:09:35.863 "digest": "sha384", 01:09:35.863 "dhgroup": "ffdhe8192" 01:09:35.863 } 01:09:35.863 } 01:09:35.863 ]' 01:09:35.863 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:35.863 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:35.863 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:35.863 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:09:35.863 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:35.863 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:35.863 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:35.863 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:36.121 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:09:36.687 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:36.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:36.687 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:36.687 11:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:36.687 11:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:36.687 11:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:36.687 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:36.687 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:09:36.687 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:09:36.946 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 01:09:36.946 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:36.946 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 01:09:36.946 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:09:36.946 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:09:36.946 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:36.946 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:36.946 11:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:36.946 11:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:36.946 11:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:36.946 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:36.946 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:37.514 01:09:37.514 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:37.514 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:37.514 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:37.514 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:37.514 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:37.514 11:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:37.514 11:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:37.514 11:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:37.514 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:37.514 { 01:09:37.514 "cntlid": 93, 01:09:37.514 "qid": 0, 01:09:37.514 "state": "enabled", 01:09:37.514 "thread": "nvmf_tgt_poll_group_000", 01:09:37.514 "listen_address": { 01:09:37.514 "trtype": "TCP", 01:09:37.514 "adrfam": "IPv4", 01:09:37.514 "traddr": "10.0.0.2", 01:09:37.514 "trsvcid": "4420" 01:09:37.514 }, 01:09:37.514 "peer_address": { 01:09:37.514 "trtype": "TCP", 01:09:37.514 "adrfam": "IPv4", 01:09:37.514 "traddr": "10.0.0.1", 01:09:37.514 "trsvcid": "59898" 01:09:37.514 }, 01:09:37.514 "auth": { 01:09:37.514 "state": "completed", 01:09:37.514 "digest": "sha384", 01:09:37.514 "dhgroup": "ffdhe8192" 01:09:37.514 } 01:09:37.514 } 01:09:37.514 ]' 01:09:37.514 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:37.514 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:37.772 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:37.772 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:09:37.772 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:37.772 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:37.772 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:37.772 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:38.030 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:38.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:38.596 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:38.596 11:06:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:39.162 01:09:39.162 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:39.162 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:39.162 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:39.421 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:39.421 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:39.421 11:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:39.421 11:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:39.421 11:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:39.421 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:39.421 { 01:09:39.421 "cntlid": 95, 01:09:39.421 "qid": 0, 01:09:39.421 "state": "enabled", 01:09:39.421 "thread": "nvmf_tgt_poll_group_000", 01:09:39.421 "listen_address": { 01:09:39.421 "trtype": "TCP", 01:09:39.421 "adrfam": "IPv4", 01:09:39.421 "traddr": "10.0.0.2", 01:09:39.421 "trsvcid": "4420" 01:09:39.421 }, 01:09:39.421 "peer_address": { 01:09:39.421 "trtype": "TCP", 01:09:39.421 "adrfam": "IPv4", 01:09:39.421 "traddr": "10.0.0.1", 01:09:39.421 "trsvcid": "59932" 01:09:39.421 }, 01:09:39.421 "auth": { 01:09:39.421 "state": "completed", 01:09:39.421 "digest": "sha384", 01:09:39.421 "dhgroup": "ffdhe8192" 01:09:39.421 } 01:09:39.421 } 01:09:39.421 ]' 01:09:39.421 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:39.421 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:09:39.421 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:39.421 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:09:39.421 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:39.679 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:39.679 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:39.679 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:39.937 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:40.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:40.503 11:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:40.504 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:40.504 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:40.761 01:09:40.761 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:40.761 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:40.761 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:41.018 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:41.018 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:41.018 11:06:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:09:41.018 11:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:41.019 11:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:41.019 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:41.019 { 01:09:41.019 "cntlid": 97, 01:09:41.019 "qid": 0, 01:09:41.019 "state": "enabled", 01:09:41.019 "thread": "nvmf_tgt_poll_group_000", 01:09:41.019 "listen_address": { 01:09:41.019 "trtype": "TCP", 01:09:41.019 "adrfam": "IPv4", 01:09:41.019 "traddr": "10.0.0.2", 01:09:41.019 "trsvcid": "4420" 01:09:41.019 }, 01:09:41.019 "peer_address": { 01:09:41.019 "trtype": "TCP", 01:09:41.019 "adrfam": "IPv4", 01:09:41.019 "traddr": "10.0.0.1", 01:09:41.019 "trsvcid": "50686" 01:09:41.019 }, 01:09:41.019 "auth": { 01:09:41.019 "state": "completed", 01:09:41.019 "digest": "sha512", 01:09:41.019 "dhgroup": "null" 01:09:41.019 } 01:09:41.019 } 01:09:41.019 ]' 01:09:41.019 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:41.276 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:09:41.276 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:41.276 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:09:41.276 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:41.276 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:41.276 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:41.276 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:41.534 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:09:42.096 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:42.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:42.096 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:42.096 11:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:42.096 11:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:42.096 11:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:42.096 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:42.096 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:09:42.096 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:09:42.352 11:06:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 01:09:42.352 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:42.352 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:09:42.352 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:09:42.352 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:09:42.352 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:42.352 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:42.352 11:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:42.352 11:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:42.352 11:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:42.352 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:42.352 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:42.609 01:09:42.609 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:42.609 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:42.609 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:42.920 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:42.920 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:42.920 11:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:42.920 11:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:42.920 11:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:42.920 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:42.920 { 01:09:42.920 "cntlid": 99, 01:09:42.920 "qid": 0, 01:09:42.920 "state": "enabled", 01:09:42.920 "thread": "nvmf_tgt_poll_group_000", 01:09:42.920 "listen_address": { 01:09:42.920 "trtype": "TCP", 01:09:42.920 "adrfam": "IPv4", 01:09:42.920 "traddr": "10.0.0.2", 01:09:42.920 "trsvcid": "4420" 01:09:42.920 }, 01:09:42.920 "peer_address": { 01:09:42.920 "trtype": "TCP", 01:09:42.920 "adrfam": "IPv4", 01:09:42.920 "traddr": "10.0.0.1", 01:09:42.920 "trsvcid": "50728" 01:09:42.920 }, 01:09:42.920 "auth": { 01:09:42.920 "state": "completed", 01:09:42.920 "digest": "sha512", 01:09:42.920 "dhgroup": "null" 01:09:42.920 } 01:09:42.920 } 01:09:42.920 ]' 01:09:42.920 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:42.920 11:06:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:09:42.920 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:42.920 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:09:42.920 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:42.920 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:42.920 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:42.920 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:43.179 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:09:43.744 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:43.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:43.744 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:43.744 11:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:43.744 11:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:43.744 11:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:43.744 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:43.744 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:09:43.744 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:09:44.002 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 01:09:44.002 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:44.002 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:09:44.002 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:09:44.002 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:09:44.002 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:44.002 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:44.002 11:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:44.002 11:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:44.002 11:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:44.002 11:06:49 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:44.002 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:44.260 01:09:44.260 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:44.260 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:44.260 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:44.519 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:44.519 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:44.519 11:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:44.519 11:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:44.519 11:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:44.519 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:44.519 { 01:09:44.519 "cntlid": 101, 01:09:44.519 "qid": 0, 01:09:44.519 "state": "enabled", 01:09:44.519 "thread": "nvmf_tgt_poll_group_000", 01:09:44.519 "listen_address": { 01:09:44.519 "trtype": "TCP", 01:09:44.519 "adrfam": "IPv4", 01:09:44.519 "traddr": "10.0.0.2", 01:09:44.519 "trsvcid": "4420" 01:09:44.519 }, 01:09:44.519 "peer_address": { 01:09:44.519 "trtype": "TCP", 01:09:44.519 "adrfam": "IPv4", 01:09:44.519 "traddr": "10.0.0.1", 01:09:44.519 "trsvcid": "50764" 01:09:44.519 }, 01:09:44.519 "auth": { 01:09:44.519 "state": "completed", 01:09:44.519 "digest": "sha512", 01:09:44.519 "dhgroup": "null" 01:09:44.519 } 01:09:44.519 } 01:09:44.519 ]' 01:09:44.519 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:44.519 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:09:44.519 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:44.519 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:09:44.519 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:44.519 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:44.519 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:44.519 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:44.792 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret 
DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:09:45.358 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:45.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:45.358 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:45.358 11:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:45.358 11:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:45.358 11:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:45.358 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:45.358 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:09:45.358 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:09:45.615 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 01:09:45.615 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:45.615 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:09:45.615 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:09:45.615 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:09:45.615 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:45.615 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:09:45.615 11:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:45.615 11:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:45.615 11:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:45.615 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:45.615 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:45.873 01:09:45.873 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:45.873 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:45.873 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:46.131 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:46.131 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
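The key3 rounds differ from the others in one detail that is easy to miss in the xtrace: the controller key is optional. The script builds it with ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), so when no ckey is configured for a slot the flag pair simply vanishes; that is why the key3 calls above pass only --dhchap-key key3 and the matching nvme connect carries a single --dhchap-secret. These rounds therefore appear to exercise one-way (host-only) authentication rather than the bidirectional variant. A minimal, self-contained illustration of that expansion, with array contents invented for the example:

  # stand-in controller-key table; slot 3 deliberately left empty, as in this trace
  ckeys=("ckey0" "ckey1" "ckey2" "")
  keyid=3
  # same ':+' expansion the script uses: emits the flag pair only when a ckey exists
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "extra args: ${ckey[@]:-<none>}"    # prints '<none>' because ckeys[3] is empty
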
01:09:46.131 11:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:46.131 11:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:46.131 11:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:46.131 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:46.131 { 01:09:46.131 "cntlid": 103, 01:09:46.131 "qid": 0, 01:09:46.131 "state": "enabled", 01:09:46.131 "thread": "nvmf_tgt_poll_group_000", 01:09:46.131 "listen_address": { 01:09:46.131 "trtype": "TCP", 01:09:46.131 "adrfam": "IPv4", 01:09:46.131 "traddr": "10.0.0.2", 01:09:46.131 "trsvcid": "4420" 01:09:46.131 }, 01:09:46.131 "peer_address": { 01:09:46.131 "trtype": "TCP", 01:09:46.131 "adrfam": "IPv4", 01:09:46.131 "traddr": "10.0.0.1", 01:09:46.131 "trsvcid": "50790" 01:09:46.131 }, 01:09:46.131 "auth": { 01:09:46.131 "state": "completed", 01:09:46.131 "digest": "sha512", 01:09:46.131 "dhgroup": "null" 01:09:46.131 } 01:09:46.131 } 01:09:46.131 ]' 01:09:46.131 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:46.131 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:09:46.131 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:46.388 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:09:46.388 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:46.388 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:46.388 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:46.388 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:46.644 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:47.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
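From here the trace leaves the "null" DH group behind: the host is reconfigured with --dhchap-dhgroups ffdhe2048 and the same four key slots are replayed, and later in this section the pattern repeats for ffdhe3072 and ffdhe4096. Judging by the loop headers echoed at target/auth.sh@92-94, the matrix is a plain nested loop; the sketch below is runnable on its own and only prints the combinations (the dhgroup list is limited to the groups that actually show up in this excerpt, and the key array is a stand-in for whatever the script generated):

  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)   # groups observed in this log section
  keys=(key0 key1 key2 key3)                      # stand-in for the script's key names
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          # in target/auth.sh this is roughly where the host options are reset and
          # connect_authenticate sha512 "$dhgroup" "$keyid" is invoked
          echo "round: sha512 / $dhgroup / key$keyid"
      done
  done
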
sha512 --dhchap-dhgroups ffdhe2048 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:47.208 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:47.773 01:09:47.773 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:47.773 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:47.773 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:47.773 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:47.773 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:47.773 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:47.773 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:47.773 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:47.773 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:47.773 { 01:09:47.773 "cntlid": 105, 01:09:47.773 "qid": 0, 01:09:47.773 "state": "enabled", 01:09:47.773 "thread": "nvmf_tgt_poll_group_000", 01:09:47.773 "listen_address": { 01:09:47.773 "trtype": "TCP", 01:09:47.773 "adrfam": "IPv4", 01:09:47.773 "traddr": "10.0.0.2", 01:09:47.773 "trsvcid": "4420" 01:09:47.773 }, 01:09:47.773 "peer_address": { 01:09:47.773 "trtype": "TCP", 01:09:47.773 "adrfam": "IPv4", 01:09:47.773 "traddr": "10.0.0.1", 01:09:47.773 "trsvcid": "50816" 01:09:47.773 }, 01:09:47.773 "auth": { 01:09:47.773 "state": "completed", 01:09:47.773 "digest": "sha512", 01:09:47.773 "dhgroup": "ffdhe2048" 01:09:47.774 } 01:09:47.774 } 01:09:47.774 ]' 01:09:47.774 11:06:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:48.033 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:09:48.034 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:48.034 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:09:48.034 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:48.034 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:48.034 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:48.034 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:48.291 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:09:48.858 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:48.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:48.858 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:48.858 11:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:48.858 11:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:48.858 11:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:48.858 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:48.858 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:09:48.858 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:09:48.858 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 01:09:48.858 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:48.858 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:09:48.858 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:09:48.858 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:09:48.858 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:48.858 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:48.858 11:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:48.858 11:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
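The --dhchap-secret and --dhchap-ctrl-secret strings handed to nvme connect all use the DH-HMAC-CHAP key representation, 'DHHC-1:NN:<base64 blob>:', where the NN field names the hash used to transform the secret (commonly 00 for an untransformed key and 01/02/03 for SHA-256/384/512) and the base64 blob is the key material followed by a short checksum; take those field meanings as general background on the format rather than something this log asserts. A small shell check using one of the secrets that appears verbatim above:

  # pick apart one DH-HMAC-CHAP secret from this trace (informational only)
  secret='DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==:'
  IFS=: read -r prefix hash_id keydata _ <<< "$secret"
  # decoded length = key bytes plus the trailing checksum
  echo "format=$prefix hash-id=$hash_id decoded-bytes=$(printf '%s' "$keydata" | base64 -d | wc -c)"
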
+x 01:09:49.116 11:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:49.116 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:49.116 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:49.374 01:09:49.374 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:49.374 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:49.374 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:49.633 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:49.633 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:49.633 11:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:49.633 11:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:49.633 11:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:49.633 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:49.633 { 01:09:49.633 "cntlid": 107, 01:09:49.633 "qid": 0, 01:09:49.633 "state": "enabled", 01:09:49.633 "thread": "nvmf_tgt_poll_group_000", 01:09:49.633 "listen_address": { 01:09:49.633 "trtype": "TCP", 01:09:49.633 "adrfam": "IPv4", 01:09:49.633 "traddr": "10.0.0.2", 01:09:49.633 "trsvcid": "4420" 01:09:49.633 }, 01:09:49.633 "peer_address": { 01:09:49.633 "trtype": "TCP", 01:09:49.633 "adrfam": "IPv4", 01:09:49.633 "traddr": "10.0.0.1", 01:09:49.633 "trsvcid": "56302" 01:09:49.633 }, 01:09:49.633 "auth": { 01:09:49.633 "state": "completed", 01:09:49.633 "digest": "sha512", 01:09:49.633 "dhgroup": "ffdhe2048" 01:09:49.633 } 01:09:49.633 } 01:09:49.633 ]' 01:09:49.633 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:49.633 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:09:49.633 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:49.633 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:09:49.633 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:49.633 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:49.633 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:49.633 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:49.892 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 
--hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:09:50.458 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:50.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:50.458 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:50.458 11:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:50.458 11:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:50.458 11:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:50.458 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:50.458 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:09:50.458 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:09:50.715 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 01:09:50.715 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:50.715 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:09:50.715 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:09:50.715 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:09:50.715 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:50.715 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:50.715 11:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:50.715 11:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:50.716 11:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:50.716 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:50.716 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:50.974 01:09:50.974 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:50.974 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:50.974 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 01:09:51.231 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:51.231 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:51.231 11:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:51.231 11:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:51.231 11:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:51.231 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:51.231 { 01:09:51.231 "cntlid": 109, 01:09:51.231 "qid": 0, 01:09:51.231 "state": "enabled", 01:09:51.231 "thread": "nvmf_tgt_poll_group_000", 01:09:51.231 "listen_address": { 01:09:51.231 "trtype": "TCP", 01:09:51.231 "adrfam": "IPv4", 01:09:51.231 "traddr": "10.0.0.2", 01:09:51.231 "trsvcid": "4420" 01:09:51.231 }, 01:09:51.231 "peer_address": { 01:09:51.231 "trtype": "TCP", 01:09:51.231 "adrfam": "IPv4", 01:09:51.231 "traddr": "10.0.0.1", 01:09:51.231 "trsvcid": "56340" 01:09:51.231 }, 01:09:51.231 "auth": { 01:09:51.231 "state": "completed", 01:09:51.231 "digest": "sha512", 01:09:51.231 "dhgroup": "ffdhe2048" 01:09:51.231 } 01:09:51.231 } 01:09:51.231 ]' 01:09:51.231 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:51.231 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:09:51.231 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:51.488 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:09:51.488 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:51.488 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:51.488 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:51.488 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:51.746 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:09:52.313 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:52.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:52.313 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:52.313 11:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:52.313 11:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:52.313 11:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:52.313 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:52.313 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
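The verification block that follows every attach is worth spelling out once, since it is the actual pass/fail criterion of each round: nvmf_subsystem_get_qpairs returns a JSON array of queue pairs, and the test asserts that the first entry negotiated the expected digest and DH group and that its auth state reached "completed". Condensed from the jq calls repeated throughout this trace (rpc_cmd being the autotest helper that talks to the nvmf target):

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # whichever group this round uses
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
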
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:09:52.313 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:09:52.572 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 01:09:52.572 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:52.572 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:09:52.572 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:09:52.572 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:09:52.572 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:52.572 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:09:52.572 11:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:52.572 11:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:52.572 11:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:52.572 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:52.572 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:52.831 01:09:52.831 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:52.831 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:52.831 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:52.831 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:52.831 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:52.831 11:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:52.831 11:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:53.089 11:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:53.089 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:53.089 { 01:09:53.089 "cntlid": 111, 01:09:53.089 "qid": 0, 01:09:53.089 "state": "enabled", 01:09:53.089 "thread": "nvmf_tgt_poll_group_000", 01:09:53.089 "listen_address": { 01:09:53.089 "trtype": "TCP", 01:09:53.089 "adrfam": "IPv4", 01:09:53.089 "traddr": "10.0.0.2", 01:09:53.089 "trsvcid": "4420" 01:09:53.089 }, 01:09:53.089 "peer_address": { 01:09:53.089 "trtype": "TCP", 01:09:53.089 "adrfam": "IPv4", 01:09:53.089 "traddr": "10.0.0.1", 01:09:53.089 "trsvcid": "56376" 01:09:53.089 }, 01:09:53.089 "auth": { 01:09:53.089 "state": "completed", 01:09:53.090 
"digest": "sha512", 01:09:53.090 "dhgroup": "ffdhe2048" 01:09:53.090 } 01:09:53.090 } 01:09:53.090 ]' 01:09:53.090 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:53.090 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:09:53.090 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:53.090 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:09:53.090 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:53.090 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:53.090 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:53.090 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:53.348 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:09:53.913 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:53.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:53.913 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:53.913 11:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:53.913 11:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:53.913 11:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:53.913 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:09:53.913 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:53.913 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:09:53.913 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:09:54.171 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 01:09:54.171 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:54.171 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:09:54.171 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:09:54.171 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:09:54.171 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:54.171 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:54.171 11:06:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:54.171 11:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:54.171 11:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:54.171 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:54.171 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:09:54.429 01:09:54.429 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:54.429 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:54.429 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:54.687 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:54.687 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:54.687 11:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:54.688 11:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:54.688 11:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:54.688 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:54.688 { 01:09:54.688 "cntlid": 113, 01:09:54.688 "qid": 0, 01:09:54.688 "state": "enabled", 01:09:54.688 "thread": "nvmf_tgt_poll_group_000", 01:09:54.688 "listen_address": { 01:09:54.688 "trtype": "TCP", 01:09:54.688 "adrfam": "IPv4", 01:09:54.688 "traddr": "10.0.0.2", 01:09:54.688 "trsvcid": "4420" 01:09:54.688 }, 01:09:54.688 "peer_address": { 01:09:54.688 "trtype": "TCP", 01:09:54.688 "adrfam": "IPv4", 01:09:54.688 "traddr": "10.0.0.1", 01:09:54.688 "trsvcid": "56410" 01:09:54.688 }, 01:09:54.688 "auth": { 01:09:54.688 "state": "completed", 01:09:54.688 "digest": "sha512", 01:09:54.688 "dhgroup": "ffdhe3072" 01:09:54.688 } 01:09:54.688 } 01:09:54.688 ]' 01:09:54.688 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:54.688 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:09:54.688 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:54.688 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:09:54.688 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:54.688 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:54.688 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:54.688 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:54.946 11:07:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:09:55.509 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:55.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:55.509 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:55.509 11:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:55.509 11:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:55.509 11:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:55.509 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:55.509 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:09:55.509 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:09:55.774 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 01:09:55.774 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:55.774 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:09:55.774 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:09:55.774 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:09:55.774 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:55.774 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:55.774 11:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:55.774 11:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:55.774 11:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:55.774 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:55.774 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:09:56.043 01:09:56.043 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:56.043 11:07:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:56.043 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:56.301 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:56.301 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:56.301 11:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:56.301 11:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:56.301 11:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:56.301 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:56.301 { 01:09:56.301 "cntlid": 115, 01:09:56.301 "qid": 0, 01:09:56.301 "state": "enabled", 01:09:56.301 "thread": "nvmf_tgt_poll_group_000", 01:09:56.301 "listen_address": { 01:09:56.301 "trtype": "TCP", 01:09:56.301 "adrfam": "IPv4", 01:09:56.301 "traddr": "10.0.0.2", 01:09:56.301 "trsvcid": "4420" 01:09:56.301 }, 01:09:56.301 "peer_address": { 01:09:56.301 "trtype": "TCP", 01:09:56.301 "adrfam": "IPv4", 01:09:56.301 "traddr": "10.0.0.1", 01:09:56.301 "trsvcid": "56424" 01:09:56.301 }, 01:09:56.301 "auth": { 01:09:56.301 "state": "completed", 01:09:56.301 "digest": "sha512", 01:09:56.301 "dhgroup": "ffdhe3072" 01:09:56.301 } 01:09:56.301 } 01:09:56.301 ]' 01:09:56.301 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:56.559 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:09:56.559 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:56.559 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:09:56.559 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:56.559 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:56.559 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:56.559 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:56.818 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:09:57.385 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:57.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:57.385 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:57.385 11:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:57.385 11:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:57.385 11:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
01:09:57.385 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:57.385 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:09:57.385 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:09:57.645 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 01:09:57.645 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:57.645 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:09:57.645 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:09:57.645 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:09:57.645 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:57.645 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:57.645 11:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:57.645 11:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:57.645 11:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:57.645 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:57.645 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:09:57.903 01:09:57.903 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:57.903 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:57.903 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:58.161 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:58.161 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:58.161 11:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:58.161 11:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:58.161 11:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:58.161 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:58.161 { 01:09:58.161 "cntlid": 117, 01:09:58.161 "qid": 0, 01:09:58.161 "state": "enabled", 01:09:58.161 "thread": "nvmf_tgt_poll_group_000", 01:09:58.161 "listen_address": { 01:09:58.161 "trtype": "TCP", 01:09:58.161 "adrfam": "IPv4", 01:09:58.161 "traddr": "10.0.0.2", 01:09:58.161 
"trsvcid": "4420" 01:09:58.161 }, 01:09:58.161 "peer_address": { 01:09:58.161 "trtype": "TCP", 01:09:58.161 "adrfam": "IPv4", 01:09:58.161 "traddr": "10.0.0.1", 01:09:58.161 "trsvcid": "56452" 01:09:58.161 }, 01:09:58.161 "auth": { 01:09:58.161 "state": "completed", 01:09:58.161 "digest": "sha512", 01:09:58.161 "dhgroup": "ffdhe3072" 01:09:58.161 } 01:09:58.161 } 01:09:58.161 ]' 01:09:58.161 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:58.161 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:09:58.161 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:09:58.161 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:09:58.161 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:09:58.161 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:09:58.161 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:09:58.161 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:09:58.419 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:09:58.983 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:09:58.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:09:58.983 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:09:58.983 11:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:58.983 11:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:59.239 11:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:59.239 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:09:59.240 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:09:59.240 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:09:59.240 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 01:09:59.240 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:09:59.240 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:09:59.240 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:09:59.240 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:09:59.240 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:09:59.240 11:07:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:09:59.240 11:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:59.240 11:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:59.240 11:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:59.240 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:59.240 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:09:59.804 01:09:59.804 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:09:59.804 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:09:59.804 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:09:59.804 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:59.804 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:09:59.804 11:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:59.804 11:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:09:59.804 11:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:59.804 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:09:59.804 { 01:09:59.804 "cntlid": 119, 01:09:59.804 "qid": 0, 01:09:59.804 "state": "enabled", 01:09:59.804 "thread": "nvmf_tgt_poll_group_000", 01:09:59.804 "listen_address": { 01:09:59.804 "trtype": "TCP", 01:09:59.804 "adrfam": "IPv4", 01:09:59.804 "traddr": "10.0.0.2", 01:09:59.804 "trsvcid": "4420" 01:09:59.804 }, 01:09:59.804 "peer_address": { 01:09:59.804 "trtype": "TCP", 01:09:59.804 "adrfam": "IPv4", 01:09:59.804 "traddr": "10.0.0.1", 01:09:59.804 "trsvcid": "51340" 01:09:59.804 }, 01:09:59.804 "auth": { 01:09:59.804 "state": "completed", 01:09:59.804 "digest": "sha512", 01:09:59.804 "dhgroup": "ffdhe3072" 01:09:59.804 } 01:09:59.804 } 01:09:59.804 ]' 01:09:59.804 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:09:59.804 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:09:59.804 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:00.061 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:10:00.061 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:10:00.061 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:00.061 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:00.061 11:07:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:00.319 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:10:00.886 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:00.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:00.886 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:00.886 11:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:00.886 11:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:00.886 11:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:00.886 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:10:00.886 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:10:00.886 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:10:00.886 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:10:01.144 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 01:10:01.144 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:10:01.144 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:10:01.144 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:10:01.144 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:10:01.144 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:10:01.144 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:10:01.144 11:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:01.144 11:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:01.144 11:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:01.144 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:10:01.144 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 01:10:01.403 01:10:01.403 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:10:01.403 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:01.403 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:10:01.662 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:01.662 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:10:01.662 11:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:01.662 11:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:01.662 11:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:01.662 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:10:01.662 { 01:10:01.662 "cntlid": 121, 01:10:01.662 "qid": 0, 01:10:01.662 "state": "enabled", 01:10:01.662 "thread": "nvmf_tgt_poll_group_000", 01:10:01.662 "listen_address": { 01:10:01.662 "trtype": "TCP", 01:10:01.662 "adrfam": "IPv4", 01:10:01.662 "traddr": "10.0.0.2", 01:10:01.662 "trsvcid": "4420" 01:10:01.662 }, 01:10:01.662 "peer_address": { 01:10:01.662 "trtype": "TCP", 01:10:01.662 "adrfam": "IPv4", 01:10:01.662 "traddr": "10.0.0.1", 01:10:01.662 "trsvcid": "51358" 01:10:01.662 }, 01:10:01.662 "auth": { 01:10:01.662 "state": "completed", 01:10:01.662 "digest": "sha512", 01:10:01.662 "dhgroup": "ffdhe4096" 01:10:01.662 } 01:10:01.662 } 01:10:01.662 ]' 01:10:01.662 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:10:01.662 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:10:01.662 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:01.662 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:10:01.662 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:10:01.662 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:01.662 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:01.662 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:01.920 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:10:02.487 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:02.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:02.487 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:02.487 11:07:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:10:02.487 11:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:02.745 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:03.309 01:10:03.309 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:10:03.309 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:03.309 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:10:03.309 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:03.309 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:10:03.309 11:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:03.309 11:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:03.309 11:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:03.309 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:10:03.309 { 01:10:03.309 "cntlid": 123, 
01:10:03.309 "qid": 0, 01:10:03.309 "state": "enabled", 01:10:03.310 "thread": "nvmf_tgt_poll_group_000", 01:10:03.310 "listen_address": { 01:10:03.310 "trtype": "TCP", 01:10:03.310 "adrfam": "IPv4", 01:10:03.310 "traddr": "10.0.0.2", 01:10:03.310 "trsvcid": "4420" 01:10:03.310 }, 01:10:03.310 "peer_address": { 01:10:03.310 "trtype": "TCP", 01:10:03.310 "adrfam": "IPv4", 01:10:03.310 "traddr": "10.0.0.1", 01:10:03.310 "trsvcid": "51400" 01:10:03.310 }, 01:10:03.310 "auth": { 01:10:03.310 "state": "completed", 01:10:03.310 "digest": "sha512", 01:10:03.310 "dhgroup": "ffdhe4096" 01:10:03.310 } 01:10:03.310 } 01:10:03.310 ]' 01:10:03.310 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:10:03.567 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:10:03.567 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:03.567 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:10:03.567 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:10:03.567 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:03.567 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:03.567 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:03.824 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:10:04.390 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:04.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:04.390 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:04.390 11:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:04.390 11:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:04.390 11:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:04.390 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:10:04.390 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:10:04.390 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:10:04.648 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 01:10:04.648 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:10:04.648 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:10:04.648 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 
01:10:04.648 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:10:04.648 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:10:04.648 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:10:04.648 11:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:04.648 11:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:04.648 11:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:04.648 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:10:04.648 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:10:04.905 01:10:04.905 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:10:04.905 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:10:04.905 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:05.162 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:05.162 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:10:05.162 11:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:05.162 11:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:05.162 11:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:05.162 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:10:05.162 { 01:10:05.162 "cntlid": 125, 01:10:05.162 "qid": 0, 01:10:05.162 "state": "enabled", 01:10:05.162 "thread": "nvmf_tgt_poll_group_000", 01:10:05.162 "listen_address": { 01:10:05.162 "trtype": "TCP", 01:10:05.162 "adrfam": "IPv4", 01:10:05.162 "traddr": "10.0.0.2", 01:10:05.162 "trsvcid": "4420" 01:10:05.162 }, 01:10:05.162 "peer_address": { 01:10:05.163 "trtype": "TCP", 01:10:05.163 "adrfam": "IPv4", 01:10:05.163 "traddr": "10.0.0.1", 01:10:05.163 "trsvcid": "51422" 01:10:05.163 }, 01:10:05.163 "auth": { 01:10:05.163 "state": "completed", 01:10:05.163 "digest": "sha512", 01:10:05.163 "dhgroup": "ffdhe4096" 01:10:05.163 } 01:10:05.163 } 01:10:05.163 ]' 01:10:05.163 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:10:05.163 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:10:05.163 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:05.163 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:10:05.163 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 01:10:05.163 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:05.163 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:05.163 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:05.420 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:10:05.985 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:05.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:05.985 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:05.985 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:05.985 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:05.985 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:05.985 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:10:05.985 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:10:05.985 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:10:06.242 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 01:10:06.242 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:10:06.242 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:10:06.242 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:10:06.242 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:10:06.242 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:10:06.242 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:10:06.242 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:06.242 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:06.242 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:06.242 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:06.242 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:06.501 01:10:06.501 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:10:06.501 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:10:06.501 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:06.759 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:06.759 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:10:06.759 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:06.759 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:06.759 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:06.759 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:10:06.759 { 01:10:06.759 "cntlid": 127, 01:10:06.759 "qid": 0, 01:10:06.759 "state": "enabled", 01:10:06.759 "thread": "nvmf_tgt_poll_group_000", 01:10:06.759 "listen_address": { 01:10:06.759 "trtype": "TCP", 01:10:06.759 "adrfam": "IPv4", 01:10:06.759 "traddr": "10.0.0.2", 01:10:06.759 "trsvcid": "4420" 01:10:06.759 }, 01:10:06.759 "peer_address": { 01:10:06.759 "trtype": "TCP", 01:10:06.759 "adrfam": "IPv4", 01:10:06.759 "traddr": "10.0.0.1", 01:10:06.759 "trsvcid": "51436" 01:10:06.759 }, 01:10:06.759 "auth": { 01:10:06.759 "state": "completed", 01:10:06.759 "digest": "sha512", 01:10:06.759 "dhgroup": "ffdhe4096" 01:10:06.759 } 01:10:06.759 } 01:10:06.759 ]' 01:10:06.759 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:10:06.759 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:10:06.759 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:06.759 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:10:06.759 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:10:07.016 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:07.016 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:07.016 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:07.016 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:10:07.582 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:07.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:07.839 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:07.839 11:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:07.839 11:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:07.839 11:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:07.839 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:10:07.839 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:10:07.839 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:10:07.839 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:10:07.839 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 01:10:07.839 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:10:07.839 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:10:07.839 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:10:07.839 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:10:07.839 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:10:07.839 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:10:07.839 11:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:07.839 11:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:08.096 11:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:08.096 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:10:08.096 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:10:08.354 01:10:08.354 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:10:08.354 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:10:08.354 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:08.612 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:08.612 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:10:08.612 11:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:08.612 11:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
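The qpair dump that follows is what the script actually inspects. A sketch of that verification step for the current combination (sha512 / ffdhe6144), using only fields present in the dumps and assuming the rpc.py path and host socket shown in the log:

  # Verify that the admin qpair completed DH-HMAC-CHAP with the expected parameters.
  set -e   # make any failed check abort this sketch
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Tear the host-side controller back down before the next key/dhgroup combination.
  "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0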
01:10:08.612 11:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:08.612 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:10:08.612 { 01:10:08.612 "cntlid": 129, 01:10:08.612 "qid": 0, 01:10:08.612 "state": "enabled", 01:10:08.612 "thread": "nvmf_tgt_poll_group_000", 01:10:08.612 "listen_address": { 01:10:08.612 "trtype": "TCP", 01:10:08.612 "adrfam": "IPv4", 01:10:08.612 "traddr": "10.0.0.2", 01:10:08.612 "trsvcid": "4420" 01:10:08.612 }, 01:10:08.612 "peer_address": { 01:10:08.612 "trtype": "TCP", 01:10:08.612 "adrfam": "IPv4", 01:10:08.612 "traddr": "10.0.0.1", 01:10:08.612 "trsvcid": "51468" 01:10:08.612 }, 01:10:08.612 "auth": { 01:10:08.612 "state": "completed", 01:10:08.612 "digest": "sha512", 01:10:08.612 "dhgroup": "ffdhe6144" 01:10:08.612 } 01:10:08.612 } 01:10:08.612 ]' 01:10:08.612 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:10:08.612 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:10:08.612 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:08.612 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:10:08.612 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:10:08.870 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:08.870 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:08.870 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:09.128 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:10:09.695 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:09.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:09.695 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:09.695 11:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:09.695 11:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:09.695 11:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:09.695 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:10:09.695 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:10:09.695 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:10:09.953 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 01:10:09.953 11:07:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:10:09.953 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:10:09.953 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:10:09.953 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:10:09.953 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:10:09.953 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:09.953 11:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:09.953 11:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:09.953 11:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:09.954 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:09.954 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:10.212 01:10:10.212 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:10:10.212 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:10:10.212 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:10.471 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:10.471 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:10:10.471 11:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:10.471 11:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:10.471 11:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:10.471 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:10:10.471 { 01:10:10.471 "cntlid": 131, 01:10:10.471 "qid": 0, 01:10:10.471 "state": "enabled", 01:10:10.471 "thread": "nvmf_tgt_poll_group_000", 01:10:10.471 "listen_address": { 01:10:10.471 "trtype": "TCP", 01:10:10.471 "adrfam": "IPv4", 01:10:10.471 "traddr": "10.0.0.2", 01:10:10.471 "trsvcid": "4420" 01:10:10.471 }, 01:10:10.471 "peer_address": { 01:10:10.471 "trtype": "TCP", 01:10:10.471 "adrfam": "IPv4", 01:10:10.471 "traddr": "10.0.0.1", 01:10:10.471 "trsvcid": "54538" 01:10:10.471 }, 01:10:10.471 "auth": { 01:10:10.471 "state": "completed", 01:10:10.471 "digest": "sha512", 01:10:10.471 "dhgroup": "ffdhe6144" 01:10:10.471 } 01:10:10.471 } 01:10:10.471 ]' 01:10:10.471 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:10:10.730 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:10:10.730 11:07:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:10.730 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:10:10.730 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:10:10.730 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:10.730 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:10.730 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:10.989 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:10:11.555 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:11.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:11.555 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:11.555 11:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:11.555 11:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:11.555 11:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:11.555 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:10:11.555 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:10:11.555 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:10:11.813 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 01:10:11.813 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:10:11.813 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:10:11.813 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:10:11.813 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:10:11.813 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:10:11.813 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:10:11.813 11:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:11.813 11:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:11.813 11:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:11.813 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:10:11.813 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:10:12.071 01:10:12.330 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:10:12.330 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:10:12.330 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:12.330 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:12.330 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:10:12.330 11:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:12.330 11:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:12.330 11:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:12.330 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:10:12.330 { 01:10:12.330 "cntlid": 133, 01:10:12.330 "qid": 0, 01:10:12.330 "state": "enabled", 01:10:12.330 "thread": "nvmf_tgt_poll_group_000", 01:10:12.330 "listen_address": { 01:10:12.330 "trtype": "TCP", 01:10:12.330 "adrfam": "IPv4", 01:10:12.330 "traddr": "10.0.0.2", 01:10:12.330 "trsvcid": "4420" 01:10:12.330 }, 01:10:12.330 "peer_address": { 01:10:12.330 "trtype": "TCP", 01:10:12.330 "adrfam": "IPv4", 01:10:12.330 "traddr": "10.0.0.1", 01:10:12.330 "trsvcid": "54562" 01:10:12.330 }, 01:10:12.330 "auth": { 01:10:12.330 "state": "completed", 01:10:12.330 "digest": "sha512", 01:10:12.330 "dhgroup": "ffdhe6144" 01:10:12.330 } 01:10:12.330 } 01:10:12.330 ]' 01:10:12.330 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:10:12.588 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:10:12.588 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:12.588 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:10:12.588 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:10:12.588 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:12.588 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:12.588 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:12.846 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret 
DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:10:13.415 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:13.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:13.415 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:13.415 11:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.415 11:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:13.415 11:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.415 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:10:13.415 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:10:13.415 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:10:13.703 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 01:10:13.703 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:10:13.703 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:10:13.703 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:10:13.703 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:10:13.703 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:10:13.703 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:10:13.703 11:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:13.703 11:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:13.703 11:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:13.703 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:13.703 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:13.962 01:10:13.962 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:10:13.962 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:10:13.962 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:14.219 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:14.219 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 01:10:14.219 11:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:14.219 11:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:14.219 11:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:14.219 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:10:14.219 { 01:10:14.219 "cntlid": 135, 01:10:14.219 "qid": 0, 01:10:14.219 "state": "enabled", 01:10:14.219 "thread": "nvmf_tgt_poll_group_000", 01:10:14.219 "listen_address": { 01:10:14.219 "trtype": "TCP", 01:10:14.219 "adrfam": "IPv4", 01:10:14.219 "traddr": "10.0.0.2", 01:10:14.219 "trsvcid": "4420" 01:10:14.219 }, 01:10:14.219 "peer_address": { 01:10:14.219 "trtype": "TCP", 01:10:14.219 "adrfam": "IPv4", 01:10:14.219 "traddr": "10.0.0.1", 01:10:14.219 "trsvcid": "54594" 01:10:14.219 }, 01:10:14.219 "auth": { 01:10:14.219 "state": "completed", 01:10:14.219 "digest": "sha512", 01:10:14.219 "dhgroup": "ffdhe6144" 01:10:14.219 } 01:10:14.219 } 01:10:14.219 ]' 01:10:14.219 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:10:14.219 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:10:14.219 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:14.219 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:10:14.219 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:10:14.478 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:14.478 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:14.478 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:14.478 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:10:15.044 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:15.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:15.044 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:15.044 11:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:15.044 11:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:15.044 11:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:15.044 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:10:15.044 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:10:15.044 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:10:15.044 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:10:15.303 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 01:10:15.303 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:10:15.303 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:10:15.303 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:10:15.303 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:10:15.303 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:10:15.303 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:10:15.303 11:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:15.303 11:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:15.303 11:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:15.303 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:10:15.303 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:10:15.870 01:10:15.870 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:10:15.870 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:10:15.870 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:16.129 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:16.129 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:10:16.129 11:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:16.129 11:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:16.129 11:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:16.129 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:10:16.129 { 01:10:16.129 "cntlid": 137, 01:10:16.129 "qid": 0, 01:10:16.129 "state": "enabled", 01:10:16.129 "thread": "nvmf_tgt_poll_group_000", 01:10:16.129 "listen_address": { 01:10:16.129 "trtype": "TCP", 01:10:16.129 "adrfam": "IPv4", 01:10:16.129 "traddr": "10.0.0.2", 01:10:16.129 "trsvcid": "4420" 01:10:16.129 }, 01:10:16.129 "peer_address": { 01:10:16.129 "trtype": "TCP", 01:10:16.129 "adrfam": "IPv4", 01:10:16.129 "traddr": "10.0.0.1", 01:10:16.129 "trsvcid": "54610" 01:10:16.129 }, 01:10:16.129 "auth": { 01:10:16.129 "state": "completed", 01:10:16.129 "digest": "sha512", 01:10:16.129 "dhgroup": "ffdhe8192" 01:10:16.129 } 01:10:16.129 } 
01:10:16.129 ]' 01:10:16.129 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:10:16.129 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:10:16.129 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:16.129 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:10:16.129 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:10:16.129 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:16.129 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:16.129 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:16.389 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:10:16.958 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:16.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:16.958 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:16.958 11:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:16.958 11:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:16.958 11:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:16.958 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:10:16.958 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:10:16.958 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:10:17.216 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 01:10:17.216 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:10:17.216 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:10:17.216 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:10:17.216 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:10:17.216 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:10:17.216 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:17.216 11:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:17.216 11:07:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:17.216 11:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:17.216 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:17.216 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:17.782 01:10:17.782 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:10:17.782 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:10:17.782 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:18.040 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:18.040 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:10:18.040 11:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:18.040 11:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:18.040 11:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:18.040 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:10:18.040 { 01:10:18.040 "cntlid": 139, 01:10:18.040 "qid": 0, 01:10:18.040 "state": "enabled", 01:10:18.040 "thread": "nvmf_tgt_poll_group_000", 01:10:18.040 "listen_address": { 01:10:18.040 "trtype": "TCP", 01:10:18.040 "adrfam": "IPv4", 01:10:18.040 "traddr": "10.0.0.2", 01:10:18.040 "trsvcid": "4420" 01:10:18.040 }, 01:10:18.040 "peer_address": { 01:10:18.040 "trtype": "TCP", 01:10:18.040 "adrfam": "IPv4", 01:10:18.041 "traddr": "10.0.0.1", 01:10:18.041 "trsvcid": "54634" 01:10:18.041 }, 01:10:18.041 "auth": { 01:10:18.041 "state": "completed", 01:10:18.041 "digest": "sha512", 01:10:18.041 "dhgroup": "ffdhe8192" 01:10:18.041 } 01:10:18.041 } 01:10:18.041 ]' 01:10:18.041 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:10:18.041 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:10:18.041 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:18.041 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:10:18.041 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:10:18.041 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:18.041 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:18.041 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:18.298 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:01:YjA1MTY1ZjA4NmFhNDkxZjdhNjRhNjg0N2EyNzFmZDUvk5EB: --dhchap-ctrl-secret DHHC-1:02:ZTAzN2M3OTQzYmFiNzkxOGI1MjhiMzlkZjNjNjliZjRhNTQyODY4MjBlODQxNjdi3CcgFw==: 01:10:18.863 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:18.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:18.863 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:18.863 11:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:18.863 11:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:18.863 11:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:18.863 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:10:18.863 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:10:18.863 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:10:19.126 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 01:10:19.126 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:10:19.126 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:10:19.126 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:10:19.126 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:10:19.126 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:10:19.126 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:10:19.126 11:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:19.126 11:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:19.126 11:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:19.126 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:10:19.126 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:10:19.700 01:10:19.700 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:10:19.700 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:10:19.700 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:19.957 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:19.957 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:10:19.957 11:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:19.957 11:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:19.957 11:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:19.957 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:10:19.957 { 01:10:19.957 "cntlid": 141, 01:10:19.957 "qid": 0, 01:10:19.957 "state": "enabled", 01:10:19.957 "thread": "nvmf_tgt_poll_group_000", 01:10:19.957 "listen_address": { 01:10:19.957 "trtype": "TCP", 01:10:19.957 "adrfam": "IPv4", 01:10:19.957 "traddr": "10.0.0.2", 01:10:19.957 "trsvcid": "4420" 01:10:19.957 }, 01:10:19.957 "peer_address": { 01:10:19.957 "trtype": "TCP", 01:10:19.957 "adrfam": "IPv4", 01:10:19.957 "traddr": "10.0.0.1", 01:10:19.957 "trsvcid": "51822" 01:10:19.957 }, 01:10:19.957 "auth": { 01:10:19.957 "state": "completed", 01:10:19.957 "digest": "sha512", 01:10:19.957 "dhgroup": "ffdhe8192" 01:10:19.957 } 01:10:19.957 } 01:10:19.957 ]' 01:10:19.957 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:10:19.957 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:10:19.957 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:19.957 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:10:19.957 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:10:19.957 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:19.957 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:19.957 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:20.214 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:02:YzJhNjFhNmZhZTU2NmFjOGU4MmU5MmRmMWUyZjUwZjkxOTE4YTY5OWZhN2Y4YzM3UDoDgg==: --dhchap-ctrl-secret DHHC-1:01:MGUyYWQzMDQ5OWQ1NzdmNTIzYzc4NjNlYWJiZDJmZWGEJNYt: 01:10:20.780 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:20.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:20.780 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:20.780 11:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:20.780 11:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:20.780 11:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:20.780 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:10:20.780 11:07:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:10:20.780 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:10:21.037 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 01:10:21.037 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:10:21.037 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:10:21.037 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:10:21.037 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:10:21.037 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:10:21.037 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:10:21.037 11:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:21.037 11:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:21.037 11:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:21.037 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:21.037 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:21.610 01:10:21.610 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:10:21.610 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:10:21.610 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:21.868 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:21.868 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:10:21.868 11:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:21.868 11:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:21.868 11:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:21.868 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:10:21.868 { 01:10:21.868 "cntlid": 143, 01:10:21.868 "qid": 0, 01:10:21.868 "state": "enabled", 01:10:21.868 "thread": "nvmf_tgt_poll_group_000", 01:10:21.868 "listen_address": { 01:10:21.868 "trtype": "TCP", 01:10:21.868 "adrfam": "IPv4", 01:10:21.868 "traddr": "10.0.0.2", 01:10:21.868 "trsvcid": "4420" 01:10:21.868 }, 01:10:21.868 "peer_address": { 01:10:21.868 "trtype": "TCP", 01:10:21.868 "adrfam": "IPv4", 01:10:21.868 "traddr": "10.0.0.1", 01:10:21.868 "trsvcid": "51848" 
01:10:21.868 }, 01:10:21.868 "auth": { 01:10:21.868 "state": "completed", 01:10:21.868 "digest": "sha512", 01:10:21.868 "dhgroup": "ffdhe8192" 01:10:21.868 } 01:10:21.868 } 01:10:21.868 ]' 01:10:21.868 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:10:21.868 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:10:21.868 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:21.868 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:10:21.868 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:10:21.868 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:21.868 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:21.868 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:22.125 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:10:22.691 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:22.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:22.691 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:22.691 11:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:22.691 11:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:22.691 11:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:22.691 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 01:10:22.691 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 01:10:22.691 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 01:10:22.691 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:10:22.691 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:10:22.691 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:10:22.950 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 01:10:22.950 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:10:22.950 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:10:22.950 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:10:22.950 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 01:10:22.950 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:10:22.950 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:10:22.950 11:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:22.950 11:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:22.950 11:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:22.950 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:10:22.950 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:10:23.517 01:10:23.517 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:10:23.517 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:23.517 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:10:23.775 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:23.775 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:10:23.775 11:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:23.775 11:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:23.775 11:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:23.775 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:10:23.775 { 01:10:23.775 "cntlid": 145, 01:10:23.775 "qid": 0, 01:10:23.775 "state": "enabled", 01:10:23.775 "thread": "nvmf_tgt_poll_group_000", 01:10:23.775 "listen_address": { 01:10:23.775 "trtype": "TCP", 01:10:23.775 "adrfam": "IPv4", 01:10:23.775 "traddr": "10.0.0.2", 01:10:23.775 "trsvcid": "4420" 01:10:23.775 }, 01:10:23.775 "peer_address": { 01:10:23.775 "trtype": "TCP", 01:10:23.775 "adrfam": "IPv4", 01:10:23.775 "traddr": "10.0.0.1", 01:10:23.775 "trsvcid": "51878" 01:10:23.775 }, 01:10:23.775 "auth": { 01:10:23.775 "state": "completed", 01:10:23.775 "digest": "sha512", 01:10:23.775 "dhgroup": "ffdhe8192" 01:10:23.775 } 01:10:23.775 } 01:10:23.775 ]' 01:10:23.775 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:10:23.775 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:10:23.775 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:23.775 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:10:23.775 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:10:23.775 11:07:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:23.775 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:23.775 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:24.033 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:00:YzEwNGEwNGZlMzFkMzY2M2Y5ZWM2OGQwMzcwNDRjNWYyYWVlODA4Y2ZkMzRjNGNljfoT4w==: --dhchap-ctrl-secret DHHC-1:03:N2ZkYjlhODkyYzA5MTgzMzVkMDZiODUwMWVlZDFjZDgxMGNhZGQzOTViMjdmYTYyOTU0ZGFjMjU5NDNiNDQ4YiUHx6Y=: 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:24.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 01:10:24.601 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:10:24.602 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 01:10:24.602 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 01:10:25.168 request: 01:10:25.168 { 01:10:25.168 "name": "nvme0", 01:10:25.168 "trtype": "tcp", 01:10:25.168 "traddr": "10.0.0.2", 01:10:25.168 "adrfam": "ipv4", 01:10:25.168 "trsvcid": "4420", 01:10:25.168 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:10:25.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb", 01:10:25.168 "prchk_reftag": false, 01:10:25.168 "prchk_guard": false, 01:10:25.168 "hdgst": false, 01:10:25.168 "ddgst": false, 01:10:25.168 "dhchap_key": "key2", 01:10:25.168 "method": "bdev_nvme_attach_controller", 01:10:25.168 "req_id": 1 01:10:25.168 } 01:10:25.168 Got JSON-RPC error response 01:10:25.168 response: 01:10:25.168 { 01:10:25.168 "code": -5, 01:10:25.168 "message": "Input/output error" 01:10:25.168 } 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:10:25.168 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:10:25.765 request: 01:10:25.765 { 01:10:25.765 "name": "nvme0", 01:10:25.765 "trtype": "tcp", 01:10:25.765 "traddr": "10.0.0.2", 01:10:25.765 "adrfam": "ipv4", 01:10:25.765 "trsvcid": "4420", 01:10:25.765 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:10:25.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb", 01:10:25.765 "prchk_reftag": false, 01:10:25.765 "prchk_guard": false, 01:10:25.765 "hdgst": false, 01:10:25.765 "ddgst": false, 01:10:25.765 "dhchap_key": "key1", 01:10:25.765 "dhchap_ctrlr_key": "ckey2", 01:10:25.765 "method": "bdev_nvme_attach_controller", 01:10:25.765 "req_id": 1 01:10:25.765 } 01:10:25.765 Got JSON-RPC error response 01:10:25.765 response: 01:10:25.765 { 01:10:25.765 "code": -5, 01:10:25.765 "message": "Input/output error" 01:10:25.765 } 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key1 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:25.765 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:26.328 request: 01:10:26.328 { 01:10:26.328 "name": "nvme0", 01:10:26.328 "trtype": "tcp", 01:10:26.328 "traddr": "10.0.0.2", 01:10:26.328 "adrfam": "ipv4", 01:10:26.328 "trsvcid": "4420", 01:10:26.328 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:10:26.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb", 01:10:26.328 "prchk_reftag": false, 01:10:26.328 "prchk_guard": false, 01:10:26.328 "hdgst": false, 01:10:26.328 "ddgst": false, 01:10:26.328 "dhchap_key": "key1", 01:10:26.328 "dhchap_ctrlr_key": "ckey1", 01:10:26.328 "method": "bdev_nvme_attach_controller", 01:10:26.328 "req_id": 1 01:10:26.328 } 01:10:26.328 Got JSON-RPC error response 01:10:26.328 response: 01:10:26.328 { 01:10:26.328 "code": -5, 01:10:26.328 "message": "Input/output error" 01:10:26.328 } 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 81584 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 81584 ']' 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 81584 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81584 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 01:10:26.328 killing process with pid 81584 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81584' 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 81584 01:10:26.328 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 81584 01:10:26.894 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 01:10:26.894 11:07:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:10:26.894 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 01:10:26.894 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:26.894 11:07:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=84346 01:10:26.894 11:07:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 01:10:26.894 11:07:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 84346 01:10:26.894 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 84346 ']' 01:10:26.894 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:26.894 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 01:10:26.894 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:10:26.894 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 01:10:26.894 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:27.825 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:10:27.825 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 01:10:27.825 11:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:10:27.825 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 01:10:27.825 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:27.825 11:07:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:27.825 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 01:10:27.825 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 84346 01:10:27.825 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 84346 ']' 01:10:27.825 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:27.825 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 01:10:27.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:10:27.825 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
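The per-key loop traced above repeats the same connect/verify/teardown sequence for each key index. A condensed sketch of one such round trip, pieced together from the commands visible in the trace (the host-side socket, the fixed host NQN, and the key names key1/ckey1 are taken from the log; this is an illustration, not the auth.sh source):

  # One DH-HMAC-CHAP round trip as exercised above (sha512 / ffdhe8192, key index 1).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }    # host-side bdev_nvme application
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb

  # Pin the host to a single digest/dhgroup pair for this iteration.
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # Allow the host on the subsystem with the matching key pair (target-side RPC, default socket).
  "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Attach from the host side, then check the negotiated auth parameters on the target.
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  "$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .digest, .dhgroup, .state'   # expect sha512 / ffdhe8192 / completed
  hostrpc bdev_nvme_detach_controller nvme0

The same sequence then repeats with an in-kernel `nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ...` / `nvme disconnect` pass, as the trace shows.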
01:10:27.826 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 01:10:27.826 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:27.826 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:10:27.826 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 01:10:27.826 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 01:10:27.826 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:27.826 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:28.083 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:28.083 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 01:10:28.083 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:10:28.083 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:10:28.083 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:10:28.083 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:10:28.083 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:10:28.083 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:10:28.083 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:28.083 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:28.083 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:28.083 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:28.083 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:28.648 01:10:28.648 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:10:28.648 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:10:28.648 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:28.906 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:28.906 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:10:28.906 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:28.906 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:28.906 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:28.906 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:10:28.906 { 01:10:28.906 "cntlid": 1, 01:10:28.906 "qid": 0, 
01:10:28.906 "state": "enabled", 01:10:28.906 "thread": "nvmf_tgt_poll_group_000", 01:10:28.906 "listen_address": { 01:10:28.906 "trtype": "TCP", 01:10:28.906 "adrfam": "IPv4", 01:10:28.906 "traddr": "10.0.0.2", 01:10:28.906 "trsvcid": "4420" 01:10:28.906 }, 01:10:28.906 "peer_address": { 01:10:28.906 "trtype": "TCP", 01:10:28.906 "adrfam": "IPv4", 01:10:28.906 "traddr": "10.0.0.1", 01:10:28.906 "trsvcid": "51948" 01:10:28.906 }, 01:10:28.906 "auth": { 01:10:28.906 "state": "completed", 01:10:28.906 "digest": "sha512", 01:10:28.906 "dhgroup": "ffdhe8192" 01:10:28.906 } 01:10:28.906 } 01:10:28.906 ]' 01:10:28.906 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:10:28.906 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:10:28.906 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:10:28.906 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:10:28.906 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:10:29.162 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:10:29.162 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:29.162 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:29.420 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid 7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-secret DHHC-1:03:MDZhYWJjYWJmZWM0Y2ZmOWViOTFlY2VhOGVkY2UzODg0MTk5NTc5ZTgwODUzYmI4MTY4MmQwYWYxZDUwMDVhZDpm5s4=: 01:10:29.985 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:10:29.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:10:29.985 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:29.985 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:29.985 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:29.985 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:29.985 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --dhchap-key key3 01:10:29.985 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:29.985 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:29.985 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:29.985 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 01:10:29.985 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 01:10:30.244 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:30.244 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 01:10:30.244 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:30.244 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 01:10:30.244 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:10:30.244 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 01:10:30.244 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:10:30.244 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:30.244 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:30.502 request: 01:10:30.502 { 01:10:30.502 "name": "nvme0", 01:10:30.502 "trtype": "tcp", 01:10:30.502 "traddr": "10.0.0.2", 01:10:30.502 "adrfam": "ipv4", 01:10:30.502 "trsvcid": "4420", 01:10:30.502 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:10:30.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb", 01:10:30.502 "prchk_reftag": false, 01:10:30.502 "prchk_guard": false, 01:10:30.502 "hdgst": false, 01:10:30.502 "ddgst": false, 01:10:30.502 "dhchap_key": "key3", 01:10:30.502 "method": "bdev_nvme_attach_controller", 01:10:30.502 "req_id": 1 01:10:30.502 } 01:10:30.502 Got JSON-RPC error response 01:10:30.502 response: 01:10:30.502 { 01:10:30.502 "code": -5, 01:10:30.502 "message": "Input/output error" 01:10:30.502 } 01:10:30.502 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 01:10:30.502 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:10:30.502 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:10:30.502 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:10:30.502 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 01:10:30.502 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 01:10:30.502 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 01:10:30.503 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 01:10:30.761 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:30.761 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 01:10:30.761 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:30.761 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 01:10:30.761 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:10:30.761 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 01:10:30.761 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:10:30.761 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:30.761 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:10:31.021 request: 01:10:31.021 { 01:10:31.021 "name": "nvme0", 01:10:31.021 "trtype": "tcp", 01:10:31.021 "traddr": "10.0.0.2", 01:10:31.021 "adrfam": "ipv4", 01:10:31.021 "trsvcid": "4420", 01:10:31.021 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:10:31.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb", 01:10:31.021 "prchk_reftag": false, 01:10:31.021 "prchk_guard": false, 01:10:31.021 "hdgst": false, 01:10:31.021 "ddgst": false, 01:10:31.021 "dhchap_key": "key3", 01:10:31.021 "method": "bdev_nvme_attach_controller", 01:10:31.021 "req_id": 1 01:10:31.021 } 01:10:31.021 Got JSON-RPC error response 01:10:31.021 response: 01:10:31.021 { 01:10:31.021 "code": -5, 01:10:31.021 "message": "Input/output error" 01:10:31.021 } 01:10:31.021 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 01:10:31.021 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:10:31.021 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:10:31.021 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:10:31.021 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 01:10:31.021 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 01:10:31.021 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 01:10:31.021 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:10:31.021 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:10:31.021 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 
--dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:10:31.280 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:10:31.539 request: 01:10:31.539 { 01:10:31.539 "name": "nvme0", 01:10:31.539 "trtype": "tcp", 01:10:31.539 "traddr": "10.0.0.2", 01:10:31.539 "adrfam": "ipv4", 01:10:31.539 "trsvcid": "4420", 01:10:31.539 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:10:31.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb", 01:10:31.539 "prchk_reftag": false, 01:10:31.539 "prchk_guard": false, 01:10:31.539 "hdgst": false, 01:10:31.539 "ddgst": false, 01:10:31.539 "dhchap_key": "key0", 01:10:31.539 "dhchap_ctrlr_key": "key1", 01:10:31.539 "method": "bdev_nvme_attach_controller", 01:10:31.539 "req_id": 1 01:10:31.539 } 01:10:31.539 Got 
JSON-RPC error response 01:10:31.539 response: 01:10:31.539 { 01:10:31.539 "code": -5, 01:10:31.539 "message": "Input/output error" 01:10:31.539 } 01:10:31.539 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 01:10:31.539 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:10:31.539 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:10:31.539 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:10:31.539 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 01:10:31.539 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 01:10:31.796 01:10:31.796 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 01:10:31.796 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:10:31.796 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 01:10:32.054 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:32.054 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 01:10:32.054 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:10:32.313 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 01:10:32.313 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 01:10:32.313 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 81616 01:10:32.313 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 81616 ']' 01:10:32.313 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 81616 01:10:32.313 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 01:10:32.313 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:10:32.313 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81616 01:10:32.313 killing process with pid 81616 01:10:32.313 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:10:32.313 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:10:32.313 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81616' 01:10:32.313 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 81616 01:10:32.313 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 81616 01:10:32.879 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 01:10:32.879 11:07:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 01:10:32.879 11:07:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 
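The error-injection checks above all follow one pattern: the host attempts an attach with a key combination the subsystem was not configured for, the `NOT` helper asserts the command fails, and the RPC surfaces JSON-RPC error code -5 ("Input/output error"), as seen in the request/response dumps. A minimal standalone version of that check, using only commands and NQNs taken from the trace (a sketch, not the test source):

  # Expect authentication to be rejected: the host presents key2, but the
  # subsystem was added with key1 only, so the attach RPC must fail (code -5).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb
  if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
      echo "unexpected success: mismatched DH-CHAP key was accepted" >&2
      exit 1
  fi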
01:10:32.879 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:10:32.879 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 01:10:32.879 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 01:10:32.879 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:10:32.879 rmmod nvme_tcp 01:10:32.879 rmmod nvme_fabrics 01:10:32.879 rmmod nvme_keyring 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 84346 ']' 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 84346 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 84346 ']' 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 84346 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84346 01:10:33.138 killing process with pid 84346 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84346' 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 84346 01:10:33.138 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 84346 01:10:33.397 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:10:33.397 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:10:33.397 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:10:33.397 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:10:33.397 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 01:10:33.397 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:33.397 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:10:33.397 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:33.397 11:07:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:10:33.397 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.FdH /tmp/spdk.key-sha256.M4n /tmp/spdk.key-sha384.QZH /tmp/spdk.key-sha512.RPW /tmp/spdk.key-sha512.O36 /tmp/spdk.key-sha384.yWm /tmp/spdk.key-sha256.x3D '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 01:10:33.397 01:10:33.397 real 2m22.876s 01:10:33.397 user 5m29.539s 01:10:33.397 sys 0m29.921s 01:10:33.397 ************************************ 01:10:33.397 END TEST nvmf_auth_target 01:10:33.397 ************************************ 
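With the auth suite finished (timing summary above), the runner moves straight into the next suite through the same `run_test` dispatcher. For reference, the scripts exercised in this part of the log can also be invoked directly from a built SPDK checkout; the bdevio.sh path and flags below are copied from the trace, while the auth.sh location and its argument handling are inferred from the `target/auth.sh` trace prefix and are assumptions:

  # Re-running these suites by hand (a built SPDK tree and root privileges are assumed):
  cd /home/vagrant/spdk_repo/spdk
  ./test/nvmf/target/auth.sh --transport=tcp                     # suite that just ended (path/flags inferred)
  ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages    # suite starting below (as invoked in the trace)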
01:10:33.397 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 01:10:33.397 11:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:10:33.397 11:07:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:10:33.397 11:07:38 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 01:10:33.397 11:07:38 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 01:10:33.397 11:07:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:10:33.397 11:07:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:10:33.656 11:07:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:10:33.656 ************************************ 01:10:33.656 START TEST nvmf_bdevio_no_huge 01:10:33.656 ************************************ 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 01:10:33.656 * Looking for test storage... 01:10:33.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:33.656 11:07:38 
nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 
-- # MALLOC_BDEV_SIZE=64 01:10:33.656 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:10:33.657 Cannot find device "nvmf_tgt_br" 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:10:33.657 Cannot find device "nvmf_tgt_br2" 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@156 -- # true 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:10:33.657 Cannot find device "nvmf_tgt_br" 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:10:33.657 Cannot find device "nvmf_tgt_br2" 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 01:10:33.657 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:10:33.916 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:10:33.916 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:33.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:33.916 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 01:10:33.916 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:33.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:33.916 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 01:10:33.916 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:10:33.916 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:10:33.916 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:10:33.916 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:10:33.916 11:07:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br 
type bridge 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:10:33.916 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:10:34.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:10:34.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 01:10:34.175 01:10:34.175 --- 10.0.0.2 ping statistics --- 01:10:34.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:34.175 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:10:34.175 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:10:34.175 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 01:10:34.175 01:10:34.175 --- 10.0.0.3 ping statistics --- 01:10:34.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:34.175 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:10:34.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:10:34.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 01:10:34.175 01:10:34.175 --- 10.0.0.1 ping statistics --- 01:10:34.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:34.175 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=84663 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 84663 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 84663 ']' 01:10:34.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 01:10:34.175 11:07:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:10:34.175 [2024-07-22 11:07:39.270570] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:10:34.175 [2024-07-22 11:07:39.270885] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 01:10:34.434 [2024-07-22 11:07:39.416233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:10:34.434 [2024-07-22 11:07:39.528557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:10:34.434 [2024-07-22 11:07:39.528887] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:10:34.434 [2024-07-22 11:07:39.529285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:10:34.434 [2024-07-22 11:07:39.529454] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:10:34.434 [2024-07-22 11:07:39.529727] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
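Before that target comes up, nvmf_veth_init builds the virtual test network seen in the preceding trace: a network namespace for the target, veth pairs bridging the initiator and target sides, an iptables rule admitting port 4420, and ping checks across 10.0.0.1/2/3. The target is then started inside the namespace with hugepages disabled and 1024 MiB of ordinary memory. A compressed sketch of the same topology (names and addresses are exactly the ones this suite uses; the second target interface, nvmf_tgt_if2 / 10.0.0.3, is configured the same way and is omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    # launch the target in the namespace: no hugepages, 1 GiB regular memory, core mask 0x78 (cores 3-6)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78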
01:10:34.434 [2024-07-22 11:07:39.530035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 01:10:34.434 [2024-07-22 11:07:39.530171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 01:10:34.434 [2024-07-22 11:07:39.530577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:10:34.434 [2024-07-22 11:07:39.530580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 01:10:34.434 [2024-07-22 11:07:39.534984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:10:35.367 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:10:35.367 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:10:35.368 [2024-07-22 11:07:40.276997] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:10:35.368 Malloc0 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:10:35.368 [2024-07-22 11:07:40.329225] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:10:35.368 { 01:10:35.368 "params": { 01:10:35.368 "name": "Nvme$subsystem", 01:10:35.368 "trtype": "$TEST_TRANSPORT", 01:10:35.368 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:35.368 "adrfam": "ipv4", 01:10:35.368 "trsvcid": "$NVMF_PORT", 01:10:35.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:35.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:35.368 "hdgst": ${hdgst:-false}, 01:10:35.368 "ddgst": ${ddgst:-false} 01:10:35.368 }, 01:10:35.368 "method": "bdev_nvme_attach_controller" 01:10:35.368 } 01:10:35.368 EOF 01:10:35.368 )") 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 01:10:35.368 11:07:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:10:35.368 "params": { 01:10:35.368 "name": "Nvme1", 01:10:35.368 "trtype": "tcp", 01:10:35.368 "traddr": "10.0.0.2", 01:10:35.368 "adrfam": "ipv4", 01:10:35.368 "trsvcid": "4420", 01:10:35.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:10:35.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:10:35.368 "hdgst": false, 01:10:35.368 "ddgst": false 01:10:35.368 }, 01:10:35.368 "method": "bdev_nvme_attach_controller" 01:10:35.368 }' 01:10:35.368 [2024-07-22 11:07:40.390271] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
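The JSON fed to bdevio above comes from gen_nvmf_target_json: a one-entry bdev subsystem config that attaches the TCP controller, handed over as /dev/fd/62 via process substitution so nothing touches disk. A minimal reproduction of that pattern (the params are copied from the trace; the outer "subsystems"/"config" wrapper is the standard SPDK JSON-config shape and is assumed here, since the trace only shows the attach entry):

    cfg='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false } } ] } ] }'
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
        --json <(echo "$cfg") --no-huge -s 1024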
01:10:35.368 [2024-07-22 11:07:40.390406] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid84699 ] 01:10:35.368 [2024-07-22 11:07:40.542298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:10:35.624 [2024-07-22 11:07:40.689328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:10:35.624 [2024-07-22 11:07:40.689537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:10:35.624 [2024-07-22 11:07:40.689670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:10:35.624 [2024-07-22 11:07:40.702707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:10:35.880 I/O targets: 01:10:35.880 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:10:35.880 01:10:35.880 01:10:35.880 CUnit - A unit testing framework for C - Version 2.1-3 01:10:35.880 http://cunit.sourceforge.net/ 01:10:35.880 01:10:35.880 01:10:35.880 Suite: bdevio tests on: Nvme1n1 01:10:35.880 Test: blockdev write read block ...passed 01:10:35.880 Test: blockdev write zeroes read block ...passed 01:10:35.880 Test: blockdev write zeroes read no split ...passed 01:10:35.880 Test: blockdev write zeroes read split ...passed 01:10:35.880 Test: blockdev write zeroes read split partial ...passed 01:10:35.880 Test: blockdev reset ...[2024-07-22 11:07:40.913799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:10:35.880 [2024-07-22 11:07:40.914068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cf0f0 (9): Bad file descriptor 01:10:35.880 [2024-07-22 11:07:40.928269] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
01:10:35.880 passed 01:10:35.880 Test: blockdev write read 8 blocks ...passed 01:10:35.880 Test: blockdev write read size > 128k ...passed 01:10:35.880 Test: blockdev write read invalid size ...passed 01:10:35.880 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:10:35.880 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:10:35.880 Test: blockdev write read max offset ...passed 01:10:35.880 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:10:35.880 Test: blockdev writev readv 8 blocks ...passed 01:10:35.880 Test: blockdev writev readv 30 x 1block ...passed 01:10:35.880 Test: blockdev writev readv block ...passed 01:10:35.880 Test: blockdev writev readv size > 128k ...passed 01:10:35.880 Test: blockdev writev readv size > 128k in two iovs ...passed 01:10:35.880 Test: blockdev comparev and writev ...[2024-07-22 11:07:40.935508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:10:35.880 [2024-07-22 11:07:40.935561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:10:35.880 [2024-07-22 11:07:40.935580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:10:35.880 [2024-07-22 11:07:40.935590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:10:35.880 [2024-07-22 11:07:40.935901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:10:35.880 [2024-07-22 11:07:40.935924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:10:35.880 [2024-07-22 11:07:40.935941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:10:35.880 [2024-07-22 11:07:40.935951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:10:35.880 [2024-07-22 11:07:40.936262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:10:35.880 [2024-07-22 11:07:40.936286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:10:35.880 [2024-07-22 11:07:40.936301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:10:35.880 [2024-07-22 11:07:40.936311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:10:35.881 [2024-07-22 11:07:40.936610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:10:35.881 [2024-07-22 11:07:40.936634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:10:35.881 [2024-07-22 11:07:40.936649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:10:35.881 [2024-07-22 11:07:40.936659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:10:35.881 passed 01:10:35.881 Test: blockdev nvme passthru rw ...passed 01:10:35.881 Test: blockdev nvme passthru vendor specific ...[2024-07-22 11:07:40.937327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:10:35.881 [2024-07-22 11:07:40.937356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:10:35.881 [2024-07-22 11:07:40.937438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:10:35.881 [2024-07-22 11:07:40.937453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:10:35.881 [2024-07-22 11:07:40.937529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:10:35.881 [2024-07-22 11:07:40.937548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:10:35.881 [2024-07-22 11:07:40.937637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:10:35.881 [2024-07-22 11:07:40.937657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:10:35.881 passed 01:10:35.881 Test: blockdev nvme admin passthru ...passed 01:10:35.881 Test: blockdev copy ...passed 01:10:35.881 01:10:35.881 Run Summary: Type Total Ran Passed Failed Inactive 01:10:35.881 suites 1 1 n/a 0 0 01:10:35.881 tests 23 23 23 0 0 01:10:35.881 asserts 152 152 152 0 n/a 01:10:35.881 01:10:35.881 Elapsed time = 0.171 seconds 01:10:36.138 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:10:36.138 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:36.138 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:10:36.138 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:36.138 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:10:36.138 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 01:10:36.138 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 01:10:36.138 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:10:36.395 rmmod nvme_tcp 01:10:36.395 rmmod nvme_fabrics 01:10:36.395 rmmod nvme_keyring 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 84663 ']' 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 84663 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 84663 ']' 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 84663 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84663 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84663' 01:10:36.395 killing process with pid 84663 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 84663 01:10:36.395 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 84663 01:10:36.653 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:10:36.653 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:10:36.653 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:10:36.653 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:10:36.653 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 01:10:36.653 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:36.653 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:10:36.653 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:36.911 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:10:36.911 01:10:36.911 real 0m3.273s 01:10:36.911 user 0m10.250s 01:10:36.911 sys 0m1.483s 01:10:36.911 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 01:10:36.911 11:07:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:10:36.911 ************************************ 01:10:36.911 END TEST nvmf_bdevio_no_huge 01:10:36.911 ************************************ 01:10:36.911 11:07:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:10:36.911 11:07:41 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 01:10:36.911 11:07:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:10:36.911 11:07:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:10:36.911 11:07:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:10:36.911 ************************************ 01:10:36.911 START TEST nvmf_tls 01:10:36.911 ************************************ 01:10:36.911 11:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 01:10:36.911 * Looking for test storage... 
01:10:36.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:36.911 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:10:37.175 Cannot find device "nvmf_tgt_br" 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:10:37.175 Cannot find device "nvmf_tgt_br2" 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:10:37.175 Cannot find device "nvmf_tgt_br" 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:10:37.175 Cannot find device "nvmf_tgt_br2" 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:37.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:37.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:10:37.175 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 01:10:37.445 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:10:37.445 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:10:37.445 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:10:37.445 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:10:37.445 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:10:37.445 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:10:37.445 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:10:37.445 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:10:37.445 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:10:37.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:10:37.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 01:10:37.446 01:10:37.446 --- 10.0.0.2 ping statistics --- 01:10:37.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:37.446 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:10:37.446 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:10:37.446 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 01:10:37.446 01:10:37.446 --- 10.0.0.3 ping statistics --- 01:10:37.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:37.446 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:10:37.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:10:37.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 01:10:37.446 01:10:37.446 --- 10.0.0.1 ping statistics --- 01:10:37.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:37.446 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84876 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84876 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84876 ']' 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:10:37.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:10:37.446 11:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:10:37.446 [2024-07-22 11:07:42.601395] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:10:37.446 [2024-07-22 11:07:42.601476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:10:37.703 [2024-07-22 11:07:42.748391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:10:37.703 [2024-07-22 11:07:42.795657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:10:37.703 [2024-07-22 11:07:42.795702] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:10:37.703 [2024-07-22 11:07:42.795711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:10:37.703 [2024-07-22 11:07:42.795720] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:10:37.703 [2024-07-22 11:07:42.795726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:10:37.703 [2024-07-22 11:07:42.795751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:10:38.639 11:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:10:38.639 11:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:10:38.639 11:07:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:10:38.640 11:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:10:38.640 11:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:10:38.640 11:07:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:38.640 11:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 01:10:38.640 11:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 01:10:38.640 true 01:10:38.640 11:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:10:38.640 11:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 01:10:38.898 11:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 01:10:38.898 11:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 01:10:38.898 11:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 01:10:39.156 11:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:10:39.156 11:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 01:10:39.156 11:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 01:10:39.156 11:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 01:10:39.156 11:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 01:10:39.414 11:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:10:39.414 11:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 01:10:39.672 11:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 01:10:39.672 11:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 01:10:39.672 11:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:10:39.672 11:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 01:10:39.931 11:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 01:10:39.931 11:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 01:10:39.931 11:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 01:10:39.931 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:10:39.931 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
01:10:40.191 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 01:10:40.191 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 01:10:40.191 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 01:10:40.450 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:10:40.450 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.qYijhtpULj 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.GVFvE7DAfW 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.qYijhtpULj 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.GVFvE7DAfW 01:10:40.709 11:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 01:10:40.968 11:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 01:10:41.227 [2024-07-22 11:07:46.292659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 01:10:41.227 11:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.qYijhtpULj 01:10:41.227 11:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.qYijhtpULj 01:10:41.227 11:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:10:41.486 [2024-07-22 11:07:46.514214] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:10:41.486 11:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:10:41.745 11:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 01:10:41.745 [2024-07-22 11:07:46.901604] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:10:41.745 [2024-07-22 11:07:46.901925] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:10:41.745 11:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:10:42.003 malloc0 01:10:42.003 11:07:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:10:42.261 11:07:47 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qYijhtpULj 01:10:42.520 [2024-07-22 11:07:47.571999] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:10:42.520 11:07:47 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.qYijhtpULj 01:10:54.722 Initializing NVMe Controllers 01:10:54.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:10:54.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:10:54.722 Initialization complete. Launching workers. 
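The two NVMeTLSkey-1:01:...: strings produced by format_interchange_psk above are the TLS pre-shared keys written to /tmp/tmp.qYijhtpULj and /tmp/tmp.GVFvE7DAfW and locked down to mode 0600 before the target is configured. Judging from the output, the helper takes the hex string argument as ASCII bytes, appends a little-endian CRC-32 of those bytes, base64-encodes the result, and wraps it as NVMeTLSkey-1:<digest>:<base64>:. A minimal Python sketch of that reading follows; the function and parameter names come from the trace, but the body is reverse-engineered from the printed keys rather than copied from SPDK's nvmf/common.sh, so treat it as an assumption.

    import base64
    import zlib

    def format_interchange_psk(secret: str, digest: int) -> str:
        # The printed keys decode back to the literal hex string passed on
        # the command line, so the secret is treated as ASCII bytes here.
        data = secret.encode("ascii")
        # Append a little-endian CRC-32 of the secret, then base64-encode.
        crc = zlib.crc32(data).to_bytes(4, byteorder="little")
        b64 = base64.b64encode(data + crc).decode("ascii")
        return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

    # Should reproduce the first key in the trace if the assumptions hold:
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))

The same helper is invoked later with digest 2 and a 48-character secret to build the NVMeTLSkey-1:02:...: key stored in /tmp/tmp.srjxyziNbl, and the 0600 permissions set here are exactly what the chmod 0666 test at the end of this section deliberately breaks.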
01:10:54.722 ======================================================== 01:10:54.722 Latency(us) 01:10:54.722 Device Information : IOPS MiB/s Average min max 01:10:54.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12179.70 47.58 5255.63 911.47 16287.07 01:10:54.722 ======================================================== 01:10:54.722 Total : 12179.70 47.58 5255.63 911.47 16287.07 01:10:54.722 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qYijhtpULj 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qYijhtpULj' 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85103 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85103 /var/tmp/bdevperf.sock 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85103 ']' 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:10:54.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:10:54.722 11:07:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:10:54.722 [2024-07-22 11:07:57.819124] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:10:54.722 [2024-07-22 11:07:57.819185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85103 ] 01:10:54.722 [2024-07-22 11:07:57.948405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:10:54.722 [2024-07-22 11:07:57.992108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:10:54.722 [2024-07-22 11:07:58.032963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:10:54.722 11:07:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:10:54.722 11:07:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:10:54.722 11:07:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qYijhtpULj 01:10:54.722 [2024-07-22 11:07:58.808388] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:10:54.722 [2024-07-22 11:07:58.808480] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:10:54.722 TLSTESTn1 01:10:54.722 11:07:58 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:10:54.722 Running I/O for 10 seconds... 01:11:04.766 01:11:04.766 Latency(us) 01:11:04.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:04.766 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:11:04.766 Verification LBA range: start 0x0 length 0x2000 01:11:04.766 TLSTESTn1 : 10.01 5683.75 22.20 0.00 0.00 22485.90 4632.26 20634.63 01:11:04.766 =================================================================================================================== 01:11:04.766 Total : 5683.75 22.20 0.00 0.00 22485.90 4632.26 20634.63 01:11:04.766 0 01:11:04.766 11:08:08 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:11:04.766 11:08:08 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 85103 01:11:04.766 11:08:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85103 ']' 01:11:04.766 11:08:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85103 01:11:04.766 11:08:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:04.766 11:08:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:04.766 11:08:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85103 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:11:04.766 killing process with pid 85103 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85103' 01:11:04.766 Received shutdown signal, test time was about 10.000000 seconds 01:11:04.766 01:11:04.766 Latency(us) 01:11:04.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:04.766 
=================================================================================================================== 01:11:04.766 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85103 01:11:04.766 [2024-07-22 11:08:09.028670] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85103 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GVFvE7DAfW 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GVFvE7DAfW 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GVFvE7DAfW 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GVFvE7DAfW' 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85236 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85236 /var/tmp/bdevperf.sock 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85236 ']' 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:11:04.766 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:04.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:11:04.767 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:11:04.767 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:04.767 11:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:04.767 [2024-07-22 11:08:09.258961] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:11:04.767 [2024-07-22 11:08:09.259021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85236 ] 01:11:04.767 [2024-07-22 11:08:09.401308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:04.767 [2024-07-22 11:08:09.444087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:11:04.767 [2024-07-22 11:08:09.484865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:05.026 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:05.026 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:05.026 11:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GVFvE7DAfW 01:11:05.286 [2024-07-22 11:08:10.363873] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:11:05.286 [2024-07-22 11:08:10.363966] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:11:05.286 [2024-07-22 11:08:10.375255] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:11:05.286 [2024-07-22 11:08:10.376036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2433010 (107): Transport endpoint is not connected 01:11:05.286 [2024-07-22 11:08:10.377024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2433010 (9): Bad file descriptor 01:11:05.286 [2024-07-22 11:08:10.378021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:11:05.286 [2024-07-22 11:08:10.378042] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 01:11:05.286 [2024-07-22 11:08:10.378054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
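This is the first of the deliberately failing attach attempts: the target registered nqn.2016-06.io.spdk:host1 with the key in /tmp/tmp.qYijhtpULj, but the initiator presents the second key, /tmp/tmp.GVFvE7DAfW, so the TLS handshake never completes, the client sees errno 107 (Transport endpoint is not connected), and bdev_nvme_attach_controller fails with the -5 Input/output error dumped just below. The surrounding NOT/run_bdevperf wrapper only asserts that the command exits non-zero. For reference, the same expected-failure check could be driven from Python with the exact flags seen in the trace; the attach_must_fail helper itself is hypothetical, not part of the test suite.

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    def attach_must_fail(psk_path: str, hostnqn: str, subnqn: str) -> None:
        # Mirror the traced bdev_nvme_attach_controller call and assert that
        # it fails, which is all the bash NOT helper does for run_bdevperf.
        cmd = [
            RPC, "-s", "/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller",
            "-b", "TLSTEST", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420",
            "-f", "ipv4", "-n", subnqn, "-q", hostnqn, "--psk", psk_path,
        ]
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            raise AssertionError("attach unexpectedly succeeded: " + result.stdout)

    # The wrong-key case traced above:
    # attach_must_fail("/tmp/tmp.GVFvE7DAfW", "nqn.2016-06.io.spdk:host1",
    #                  "nqn.2016-06.io.spdk:cnode1")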
01:11:05.286 request: 01:11:05.286 { 01:11:05.286 "name": "TLSTEST", 01:11:05.286 "trtype": "tcp", 01:11:05.286 "traddr": "10.0.0.2", 01:11:05.286 "adrfam": "ipv4", 01:11:05.286 "trsvcid": "4420", 01:11:05.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:05.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:05.286 "prchk_reftag": false, 01:11:05.286 "prchk_guard": false, 01:11:05.286 "hdgst": false, 01:11:05.286 "ddgst": false, 01:11:05.286 "psk": "/tmp/tmp.GVFvE7DAfW", 01:11:05.286 "method": "bdev_nvme_attach_controller", 01:11:05.286 "req_id": 1 01:11:05.286 } 01:11:05.286 Got JSON-RPC error response 01:11:05.286 response: 01:11:05.286 { 01:11:05.286 "code": -5, 01:11:05.286 "message": "Input/output error" 01:11:05.286 } 01:11:05.286 11:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85236 01:11:05.286 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85236 ']' 01:11:05.286 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85236 01:11:05.286 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:05.286 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:05.286 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85236 01:11:05.286 killing process with pid 85236 01:11:05.286 Received shutdown signal, test time was about 10.000000 seconds 01:11:05.286 01:11:05.286 Latency(us) 01:11:05.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:05.286 =================================================================================================================== 01:11:05.286 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:11:05.286 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:11:05.286 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:11:05.286 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85236' 01:11:05.286 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85236 01:11:05.286 [2024-07-22 11:08:10.427129] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:11:05.286 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85236 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qYijhtpULj 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qYijhtpULj 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qYijhtpULj 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qYijhtpULj' 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85264 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85264 /var/tmp/bdevperf.sock 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85264 ']' 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:05.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:05.546 11:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:05.546 [2024-07-22 11:08:10.654907] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:11:05.546 [2024-07-22 11:08:10.654973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85264 ] 01:11:05.805 [2024-07-22 11:08:10.784360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:05.805 [2024-07-22 11:08:10.824826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:11:05.805 [2024-07-22 11:08:10.865665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:06.373 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:06.373 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:06.373 11:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.qYijhtpULj 01:11:06.632 [2024-07-22 11:08:11.648826] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:11:06.632 [2024-07-22 11:08:11.648924] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:11:06.632 [2024-07-22 11:08:11.653216] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 01:11:06.632 [2024-07-22 11:08:11.653247] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 01:11:06.632 [2024-07-22 11:08:11.653290] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:11:06.632 [2024-07-22 11:08:11.653987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc5010 (107): Transport endpoint is not connected 01:11:06.632 [2024-07-22 11:08:11.654973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc5010 (9): Bad file descriptor 01:11:06.632 [2024-07-22 11:08:11.655970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:11:06.632 [2024-07-22 11:08:11.655986] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 01:11:06.632 [2024-07-22 11:08:11.655997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
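Here the key is the right one but the host NQN is not: only nqn.2016-06.io.spdk:host1 was added to the subsystem, so when the initiator connects as host2 the target-side lookup fails with "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" and the connection is closed, which the client again reports as errno 107 and the -5 error shown below. The error text suggests the PSK is looked up by a string built from the host and subsystem NQNs; the tiny sketch below only reproduces that observed format, and the meaning of the literal "NVMe0R01" prefix is an assumption left to the NVMe/TCP TLS specification.

    def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
        # Format copied from the tcp_sock_get_key / posix_sock errors above;
        # the "NVMe0R01" field is reproduced verbatim, not decoded.
        return f"NVMe0R01 {hostnqn} {subnqn}"

    # The two lookups that fail in this section:
    print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
    print(tls_psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode2"))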
01:11:06.632 request: 01:11:06.632 { 01:11:06.632 "name": "TLSTEST", 01:11:06.632 "trtype": "tcp", 01:11:06.632 "traddr": "10.0.0.2", 01:11:06.632 "adrfam": "ipv4", 01:11:06.632 "trsvcid": "4420", 01:11:06.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:06.632 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:11:06.632 "prchk_reftag": false, 01:11:06.632 "prchk_guard": false, 01:11:06.632 "hdgst": false, 01:11:06.632 "ddgst": false, 01:11:06.632 "psk": "/tmp/tmp.qYijhtpULj", 01:11:06.632 "method": "bdev_nvme_attach_controller", 01:11:06.632 "req_id": 1 01:11:06.632 } 01:11:06.632 Got JSON-RPC error response 01:11:06.632 response: 01:11:06.632 { 01:11:06.632 "code": -5, 01:11:06.632 "message": "Input/output error" 01:11:06.632 } 01:11:06.632 11:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85264 01:11:06.632 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85264 ']' 01:11:06.632 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85264 01:11:06.632 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:06.632 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:06.632 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85264 01:11:06.632 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:11:06.632 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:11:06.632 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85264' 01:11:06.632 killing process with pid 85264 01:11:06.632 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85264 01:11:06.632 Received shutdown signal, test time was about 10.000000 seconds 01:11:06.632 01:11:06.632 Latency(us) 01:11:06.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:06.632 =================================================================================================================== 01:11:06.632 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:11:06.632 [2024-07-22 11:08:11.721280] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:11:06.632 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85264 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qYijhtpULj 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qYijhtpULj 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qYijhtpULj 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qYijhtpULj' 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85286 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85286 /var/tmp/bdevperf.sock 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85286 ']' 01:11:06.890 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:11:06.891 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:06.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:11:06.891 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:11:06.891 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:06.891 11:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:06.891 [2024-07-22 11:08:11.933132] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:11:06.891 [2024-07-22 11:08:11.933194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85286 ] 01:11:06.891 [2024-07-22 11:08:12.065216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:07.148 [2024-07-22 11:08:12.108915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:11:07.148 [2024-07-22 11:08:12.149828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:07.714 11:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:07.714 11:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:07.714 11:08:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qYijhtpULj 01:11:07.976 [2024-07-22 11:08:12.957028] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:11:07.976 [2024-07-22 11:08:12.957116] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:11:07.976 [2024-07-22 11:08:12.961389] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 01:11:07.976 [2024-07-22 11:08:12.961421] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 01:11:07.976 [2024-07-22 11:08:12.961463] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:11:07.976 [2024-07-22 11:08:12.962168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e5010 (107): Transport endpoint is not connected 01:11:07.976 [2024-07-22 11:08:12.963154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e5010 (9): Bad file descriptor 01:11:07.976 [2024-07-22 11:08:12.964151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 01:11:07.976 [2024-07-22 11:08:12.964167] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 01:11:07.976 [2024-07-22 11:08:12.964179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
01:11:07.976 request: 01:11:07.976 { 01:11:07.976 "name": "TLSTEST", 01:11:07.976 "trtype": "tcp", 01:11:07.976 "traddr": "10.0.0.2", 01:11:07.976 "adrfam": "ipv4", 01:11:07.976 "trsvcid": "4420", 01:11:07.976 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:11:07.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:07.976 "prchk_reftag": false, 01:11:07.976 "prchk_guard": false, 01:11:07.976 "hdgst": false, 01:11:07.976 "ddgst": false, 01:11:07.976 "psk": "/tmp/tmp.qYijhtpULj", 01:11:07.976 "method": "bdev_nvme_attach_controller", 01:11:07.976 "req_id": 1 01:11:07.976 } 01:11:07.976 Got JSON-RPC error response 01:11:07.976 response: 01:11:07.976 { 01:11:07.976 "code": -5, 01:11:07.976 "message": "Input/output error" 01:11:07.976 } 01:11:07.976 11:08:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85286 01:11:07.976 11:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85286 ']' 01:11:07.976 11:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85286 01:11:07.976 11:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:07.976 11:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:07.976 11:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85286 01:11:07.976 killing process with pid 85286 01:11:07.976 Received shutdown signal, test time was about 10.000000 seconds 01:11:07.976 01:11:07.976 Latency(us) 01:11:07.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:07.976 =================================================================================================================== 01:11:07.976 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:11:07.976 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:11:07.976 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:11:07.976 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85286' 01:11:07.976 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85286 01:11:07.976 [2024-07-22 11:08:13.021890] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:11:07.976 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85286 01:11:08.233 11:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:11:08.233 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:11:08.233 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:11:08.233 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:11:08.233 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:11:08.233 11:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:11:08.233 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:11:08.233 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:11:08.233 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85309 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85309 /var/tmp/bdevperf.sock 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85309 ']' 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:11:08.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:08.234 11:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:08.234 [2024-07-22 11:08:13.233167] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:11:08.234 [2024-07-22 11:08:13.233227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85309 ] 01:11:08.234 [2024-07-22 11:08:13.365121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:08.234 [2024-07-22 11:08:13.407672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:11:08.491 [2024-07-22 11:08:13.448634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:09.064 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:09.064 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:09.064 11:08:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:11:09.321 [2024-07-22 11:08:14.309415] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:11:09.321 [2024-07-22 11:08:14.311084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c8e60 (9): Bad file descriptor 01:11:09.321 [2024-07-22 11:08:14.312075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:11:09.321 [2024-07-22 11:08:14.312091] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 01:11:09.321 [2024-07-22 11:08:14.312103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
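The third failure omits the PSK entirely: bdev_nvme_attach_controller is called without --psk against the listener that was created with -k, the connection is dropped before any I/O, and the client ends up at the same errno 107 / -5 result. The "request:" block below is the JSON-RPC params object rpc.py sent over /var/tmp/bdevperf.sock; note that it simply has no psk member. For illustration, the same call can be made without rpc.py by writing one JSON-RPC 2.0 object to that UNIX socket; the raw framing used here (a bare JSON object, a single recv for the reply) is an assumption based on how the CLI behaves in this log, not a documented contract.

    import json
    import socket

    def attach_controller_no_psk(sock_path: str = "/var/tmp/bdevperf.sock") -> dict:
        # Same params as the "request:" dump below, minus rpc.py's defaulted
        # prchk/digest flags and with no psk member at all.
        request = {
            "jsonrpc": "2.0",
            "id": 1,
            "method": "bdev_nvme_attach_controller",
            "params": {
                "name": "TLSTEST",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
            },
        }
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(request).encode())
            # One recv is enough for the small error response expected here.
            return json.loads(s.recv(65536).decode())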
01:11:09.321 request: 01:11:09.321 { 01:11:09.321 "name": "TLSTEST", 01:11:09.321 "trtype": "tcp", 01:11:09.321 "traddr": "10.0.0.2", 01:11:09.321 "adrfam": "ipv4", 01:11:09.321 "trsvcid": "4420", 01:11:09.321 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:09.321 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:09.321 "prchk_reftag": false, 01:11:09.321 "prchk_guard": false, 01:11:09.321 "hdgst": false, 01:11:09.321 "ddgst": false, 01:11:09.321 "method": "bdev_nvme_attach_controller", 01:11:09.321 "req_id": 1 01:11:09.321 } 01:11:09.321 Got JSON-RPC error response 01:11:09.321 response: 01:11:09.321 { 01:11:09.321 "code": -5, 01:11:09.321 "message": "Input/output error" 01:11:09.321 } 01:11:09.321 11:08:14 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85309 01:11:09.321 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85309 ']' 01:11:09.321 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85309 01:11:09.321 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:09.321 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:09.321 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85309 01:11:09.321 killing process with pid 85309 01:11:09.321 Received shutdown signal, test time was about 10.000000 seconds 01:11:09.321 01:11:09.321 Latency(us) 01:11:09.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:09.321 =================================================================================================================== 01:11:09.321 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:11:09.321 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:11:09.321 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:11:09.321 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85309' 01:11:09.321 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85309 01:11:09.321 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85309 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 84876 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84876 ']' 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84876 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84876 01:11:09.579 killing process with pid 84876 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
84876' 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84876 01:11:09.579 [2024-07-22 11:08:14.579539] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:11:09.579 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84876 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.srjxyziNbl 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.srjxyziNbl 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85352 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85352 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85352 ']' 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:09.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:09.837 11:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:09.837 [2024-07-22 11:08:15.014032] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:11:09.837 [2024-07-22 11:08:15.014100] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:10.096 [2024-07-22 11:08:15.158554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:10.096 [2024-07-22 11:08:15.223102] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:10.096 [2024-07-22 11:08:15.223165] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:10.096 [2024-07-22 11:08:15.223176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:10.096 [2024-07-22 11:08:15.223185] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:10.096 [2024-07-22 11:08:15.223192] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:11:10.096 [2024-07-22 11:08:15.223221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:11:10.096 [2024-07-22 11:08:15.295909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:10.665 11:08:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:10.665 11:08:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:10.665 11:08:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:11:10.665 11:08:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:11:10.665 11:08:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:10.925 11:08:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:10.925 11:08:15 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.srjxyziNbl 01:11:10.925 11:08:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.srjxyziNbl 01:11:10.925 11:08:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:11:10.925 [2024-07-22 11:08:16.063682] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:10.925 11:08:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:11:11.184 11:08:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 01:11:11.443 [2024-07-22 11:08:16.447097] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:11:11.443 [2024-07-22 11:08:16.447341] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:11:11.443 11:08:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:11:11.443 malloc0 01:11:11.702 11:08:16 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:11:11.702 11:08:16 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.srjxyziNbl 01:11:11.962 
[2024-07-22 11:08:17.000989] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.srjxyziNbl 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.srjxyziNbl' 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85401 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85401 /var/tmp/bdevperf.sock 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85401 ']' 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:11.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:11.962 11:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:11.962 [2024-07-22 11:08:17.068056] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:11:11.962 [2024-07-22 11:08:17.068126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85401 ] 01:11:12.221 [2024-07-22 11:08:17.208462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:12.221 [2024-07-22 11:08:17.250650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:11:12.221 [2024-07-22 11:08:17.291289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:12.790 11:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:12.790 11:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:12.790 11:08:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.srjxyziNbl 01:11:13.049 [2024-07-22 11:08:18.058479] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:11:13.049 [2024-07-22 11:08:18.058572] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:11:13.049 TLSTESTn1 01:11:13.049 11:08:18 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:11:13.049 Running I/O for 10 seconds... 01:11:23.035 01:11:23.035 Latency(us) 01:11:23.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:23.035 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:11:23.035 Verification LBA range: start 0x0 length 0x2000 01:11:23.035 TLSTESTn1 : 10.01 5481.15 21.41 0.00 0.00 23318.14 4500.67 28846.37 01:11:23.035 =================================================================================================================== 01:11:23.035 Total : 5481.15 21.41 0.00 0.00 23318.14 4500.67 28846.37 01:11:23.035 0 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 85401 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85401 ']' 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85401 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85401 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:11:23.295 killing process with pid 85401 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85401' 01:11:23.295 Received shutdown signal, test time was about 10.000000 seconds 01:11:23.295 01:11:23.295 Latency(us) 01:11:23.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:23.295 
=================================================================================================================== 01:11:23.295 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85401 01:11:23.295 [2024-07-22 11:08:28.288137] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85401 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.srjxyziNbl 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.srjxyziNbl 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.srjxyziNbl 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.srjxyziNbl 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.srjxyziNbl' 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85531 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85531 /var/tmp/bdevperf.sock 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85531 ']' 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:23.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:23.295 11:08:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:23.555 [2024-07-22 11:08:28.523143] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
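The chmod 0666 at target/tls.sh@170 deliberately loosens the PSK file, and the following run_bdevperf is wrapped in NOT, so the attach against the fresh bdevperf instance (pid 85531 here) is expected to be rejected. A hedged sketch of that negative check, using only commands that appear in this log:

  chmod 0666 /tmp/tmp.srjxyziNbl   # world-readable key: SPDK must refuse to load it
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.srjxyziNbl; then
      echo "unexpected: attach succeeded with a world-readable PSK" >&2
      exit 1
  fi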
01:11:23.555 [2024-07-22 11:08:28.523211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85531 ] 01:11:23.555 [2024-07-22 11:08:28.668026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:23.555 [2024-07-22 11:08:28.709308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:11:23.555 [2024-07-22 11:08:28.750077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:24.494 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:24.494 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:24.494 11:08:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.srjxyziNbl 01:11:24.494 [2024-07-22 11:08:29.533268] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:11:24.494 [2024-07-22 11:08:29.533333] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 01:11:24.494 [2024-07-22 11:08:29.533342] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.srjxyziNbl 01:11:24.494 request: 01:11:24.494 { 01:11:24.494 "name": "TLSTEST", 01:11:24.494 "trtype": "tcp", 01:11:24.494 "traddr": "10.0.0.2", 01:11:24.494 "adrfam": "ipv4", 01:11:24.494 "trsvcid": "4420", 01:11:24.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:24.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:24.494 "prchk_reftag": false, 01:11:24.494 "prchk_guard": false, 01:11:24.494 "hdgst": false, 01:11:24.494 "ddgst": false, 01:11:24.494 "psk": "/tmp/tmp.srjxyziNbl", 01:11:24.494 "method": "bdev_nvme_attach_controller", 01:11:24.494 "req_id": 1 01:11:24.494 } 01:11:24.494 Got JSON-RPC error response 01:11:24.494 response: 01:11:24.494 { 01:11:24.494 "code": -1, 01:11:24.494 "message": "Operation not permitted" 01:11:24.494 } 01:11:24.494 11:08:29 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85531 01:11:24.494 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85531 ']' 01:11:24.494 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85531 01:11:24.494 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:24.494 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:24.494 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85531 01:11:24.494 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:11:24.494 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:11:24.494 killing process with pid 85531 01:11:24.494 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85531' 01:11:24.494 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85531 01:11:24.494 Received shutdown signal, test time was about 10.000000 seconds 01:11:24.494 01:11:24.494 Latency(us) 01:11:24.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:24.494 
=================================================================================================================== 01:11:24.494 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:11:24.494 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85531 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 85352 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85352 ']' 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85352 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85352 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:11:24.754 killing process with pid 85352 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85352' 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85352 01:11:24.754 [2024-07-22 11:08:29.799125] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:11:24.754 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85352 01:11:25.014 11:08:29 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 01:11:25.014 11:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:11:25.014 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:11:25.014 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:25.014 11:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85563 01:11:25.014 11:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:11:25.014 11:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85563 01:11:25.014 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85563 ']' 01:11:25.014 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:25.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:25.014 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:25.014 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
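The repeated killprocess calls above (pids 85401, 85531, 85352) all follow the same autotest_common.sh pattern: confirm the pid is still alive, confirm it is an SPDK reactor process rather than a sudo wrapper, kill it, then reap it. A reduced sketch of that pattern, assembled only from the commands visible in this log and not a copy of the real helper:

  killprocess() {
      local pid=$1
      kill -0 "$pid"                                       # process must still exist
      [ "$(ps --no-headers -o comm= "$pid")" != sudo ]     # never kill a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                                  # reap it; ignore its exit status
  }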
01:11:25.014 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:25.014 11:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:25.014 [2024-07-22 11:08:30.048685] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:11:25.014 [2024-07-22 11:08:30.048751] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:25.014 [2024-07-22 11:08:30.192965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:25.273 [2024-07-22 11:08:30.234436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:25.273 [2024-07-22 11:08:30.234474] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:25.273 [2024-07-22 11:08:30.234484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:25.273 [2024-07-22 11:08:30.234492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:25.273 [2024-07-22 11:08:30.234498] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:11:25.273 [2024-07-22 11:08:30.234528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:11:25.273 [2024-07-22 11:08:30.275281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.srjxyziNbl 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.srjxyziNbl 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.srjxyziNbl 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.srjxyziNbl 01:11:25.843 11:08:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:11:26.102 [2024-07-22 11:08:31.122954] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:26.102 11:08:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDK00000000000001 -m 10 01:11:26.360 11:08:31 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 01:11:26.360 [2024-07-22 11:08:31.498373] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:11:26.360 [2024-07-22 11:08:31.498573] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:11:26.360 11:08:31 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:11:26.619 malloc0 01:11:26.619 11:08:31 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:11:26.877 11:08:31 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.srjxyziNbl 01:11:26.877 [2024-07-22 11:08:32.070372] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 01:11:26.877 [2024-07-22 11:08:32.070404] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 01:11:26.877 [2024-07-22 11:08:32.070432] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 01:11:26.877 request: 01:11:26.877 { 01:11:26.877 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:26.877 "host": "nqn.2016-06.io.spdk:host1", 01:11:26.877 "psk": "/tmp/tmp.srjxyziNbl", 01:11:26.877 "method": "nvmf_subsystem_add_host", 01:11:26.877 "req_id": 1 01:11:26.877 } 01:11:26.877 Got JSON-RPC error response 01:11:26.877 response: 01:11:26.877 { 01:11:26.877 "code": -32603, 01:11:26.877 "message": "Internal error" 01:11:26.877 } 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 85563 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85563 ']' 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85563 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85563 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:11:27.135 killing process with pid 85563 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85563' 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85563 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85563 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.srjxyziNbl 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85621 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85621 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85621 ']' 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:27.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:27.135 11:08:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:27.393 [2024-07-22 11:08:32.377666] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:11:27.393 [2024-07-22 11:08:32.377728] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:27.393 [2024-07-22 11:08:32.521304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:27.393 [2024-07-22 11:08:32.562965] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:27.393 [2024-07-22 11:08:32.563008] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:27.393 [2024-07-22 11:08:32.563018] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:27.393 [2024-07-22 11:08:32.563026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:27.393 [2024-07-22 11:08:32.563033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
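The -32603 "Internal error" from nvmf_subsystem_add_host above is the target-side counterpart of the earlier initiator failure: the target also refuses to read a PSK file with permissive mode bits, and tls.sh@181 restores 0600 before continuing. A hedged sketch of that permission check, with rpc.py talking to the default /var/tmp/spdk.sock as in the log:

  # With mode 0666 the target cannot load the key ...
  chmod 0666 /tmp/tmp.srjxyziNbl
  ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.srjxyziNbl
  # ... and with 0600 the same RPC is accepted (modulo the PSK-path deprecation warning).
  chmod 0600 /tmp/tmp.srjxyziNbl
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.srjxyziNbl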
01:11:27.393 [2024-07-22 11:08:32.563056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:11:27.651 [2024-07-22 11:08:32.604178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:28.218 11:08:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:28.218 11:08:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:28.218 11:08:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:11:28.218 11:08:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:11:28.218 11:08:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:28.218 11:08:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:28.218 11:08:33 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.srjxyziNbl 01:11:28.218 11:08:33 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.srjxyziNbl 01:11:28.218 11:08:33 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:11:28.476 [2024-07-22 11:08:33.452294] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:28.476 11:08:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:11:28.476 11:08:33 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 01:11:28.734 [2024-07-22 11:08:33.847723] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:11:28.734 [2024-07-22 11:08:33.847923] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:11:28.734 11:08:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:11:28.991 malloc0 01:11:28.991 11:08:34 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:11:29.249 11:08:34 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.srjxyziNbl 01:11:29.249 [2024-07-22 11:08:34.427948] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:11:29.249 11:08:34 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:11:29.249 11:08:34 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=85677 01:11:29.249 11:08:34 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:11:29.249 11:08:34 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 85677 /var/tmp/bdevperf.sock 01:11:29.249 11:08:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85677 ']' 01:11:29.249 11:08:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:11:29.249 11:08:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:29.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
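The happy-path setup_nvmf_tgt run above, against the new target (pid 85621), consists of five RPCs: a TCP transport, one subsystem, a TLS-enabled listener (-k), a malloc namespace, and the host registration carrying the PSK. Collected into one sketch with the commands exactly as they appear in the log; the RPC shorthand variable is an editorial convenience, not part of the script:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.srjxyziNbl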
01:11:29.249 11:08:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:11:29.249 11:08:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:29.249 11:08:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:29.508 [2024-07-22 11:08:34.476929] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:11:29.508 [2024-07-22 11:08:34.476990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85677 ] 01:11:29.508 [2024-07-22 11:08:34.620284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:29.508 [2024-07-22 11:08:34.666465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:11:29.508 [2024-07-22 11:08:34.709681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:30.492 11:08:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:30.492 11:08:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:30.492 11:08:35 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.srjxyziNbl 01:11:30.492 [2024-07-22 11:08:35.513154] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:11:30.492 [2024-07-22 11:08:35.513286] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:11:30.492 TLSTESTn1 01:11:30.492 11:08:35 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 01:11:30.750 11:08:35 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 01:11:30.750 "subsystems": [ 01:11:30.750 { 01:11:30.750 "subsystem": "keyring", 01:11:30.750 "config": [] 01:11:30.750 }, 01:11:30.750 { 01:11:30.750 "subsystem": "iobuf", 01:11:30.750 "config": [ 01:11:30.750 { 01:11:30.750 "method": "iobuf_set_options", 01:11:30.750 "params": { 01:11:30.750 "small_pool_count": 8192, 01:11:30.750 "large_pool_count": 1024, 01:11:30.750 "small_bufsize": 8192, 01:11:30.750 "large_bufsize": 135168 01:11:30.750 } 01:11:30.750 } 01:11:30.750 ] 01:11:30.750 }, 01:11:30.750 { 01:11:30.750 "subsystem": "sock", 01:11:30.750 "config": [ 01:11:30.750 { 01:11:30.750 "method": "sock_set_default_impl", 01:11:30.750 "params": { 01:11:30.750 "impl_name": "uring" 01:11:30.750 } 01:11:30.750 }, 01:11:30.750 { 01:11:30.750 "method": "sock_impl_set_options", 01:11:30.750 "params": { 01:11:30.750 "impl_name": "ssl", 01:11:30.750 "recv_buf_size": 4096, 01:11:30.750 "send_buf_size": 4096, 01:11:30.750 "enable_recv_pipe": true, 01:11:30.750 "enable_quickack": false, 01:11:30.750 "enable_placement_id": 0, 01:11:30.750 "enable_zerocopy_send_server": true, 01:11:30.750 "enable_zerocopy_send_client": false, 01:11:30.750 "zerocopy_threshold": 0, 01:11:30.750 "tls_version": 0, 01:11:30.750 "enable_ktls": false 01:11:30.750 } 01:11:30.750 }, 01:11:30.750 { 01:11:30.750 "method": "sock_impl_set_options", 01:11:30.750 "params": { 01:11:30.750 "impl_name": "posix", 01:11:30.750 
"recv_buf_size": 2097152, 01:11:30.750 "send_buf_size": 2097152, 01:11:30.750 "enable_recv_pipe": true, 01:11:30.750 "enable_quickack": false, 01:11:30.750 "enable_placement_id": 0, 01:11:30.750 "enable_zerocopy_send_server": true, 01:11:30.750 "enable_zerocopy_send_client": false, 01:11:30.750 "zerocopy_threshold": 0, 01:11:30.750 "tls_version": 0, 01:11:30.750 "enable_ktls": false 01:11:30.750 } 01:11:30.750 }, 01:11:30.750 { 01:11:30.750 "method": "sock_impl_set_options", 01:11:30.750 "params": { 01:11:30.750 "impl_name": "uring", 01:11:30.750 "recv_buf_size": 2097152, 01:11:30.750 "send_buf_size": 2097152, 01:11:30.750 "enable_recv_pipe": true, 01:11:30.750 "enable_quickack": false, 01:11:30.750 "enable_placement_id": 0, 01:11:30.750 "enable_zerocopy_send_server": false, 01:11:30.750 "enable_zerocopy_send_client": false, 01:11:30.750 "zerocopy_threshold": 0, 01:11:30.750 "tls_version": 0, 01:11:30.750 "enable_ktls": false 01:11:30.750 } 01:11:30.750 } 01:11:30.750 ] 01:11:30.750 }, 01:11:30.750 { 01:11:30.750 "subsystem": "vmd", 01:11:30.750 "config": [] 01:11:30.750 }, 01:11:30.750 { 01:11:30.750 "subsystem": "accel", 01:11:30.750 "config": [ 01:11:30.750 { 01:11:30.750 "method": "accel_set_options", 01:11:30.750 "params": { 01:11:30.750 "small_cache_size": 128, 01:11:30.750 "large_cache_size": 16, 01:11:30.750 "task_count": 2048, 01:11:30.750 "sequence_count": 2048, 01:11:30.750 "buf_count": 2048 01:11:30.750 } 01:11:30.750 } 01:11:30.750 ] 01:11:30.750 }, 01:11:30.750 { 01:11:30.750 "subsystem": "bdev", 01:11:30.750 "config": [ 01:11:30.750 { 01:11:30.750 "method": "bdev_set_options", 01:11:30.750 "params": { 01:11:30.750 "bdev_io_pool_size": 65535, 01:11:30.750 "bdev_io_cache_size": 256, 01:11:30.750 "bdev_auto_examine": true, 01:11:30.750 "iobuf_small_cache_size": 128, 01:11:30.750 "iobuf_large_cache_size": 16 01:11:30.750 } 01:11:30.750 }, 01:11:30.750 { 01:11:30.750 "method": "bdev_raid_set_options", 01:11:30.750 "params": { 01:11:30.750 "process_window_size_kb": 1024, 01:11:30.750 "process_max_bandwidth_mb_sec": 0 01:11:30.750 } 01:11:30.750 }, 01:11:30.750 { 01:11:30.750 "method": "bdev_iscsi_set_options", 01:11:30.750 "params": { 01:11:30.750 "timeout_sec": 30 01:11:30.750 } 01:11:30.750 }, 01:11:30.750 { 01:11:30.750 "method": "bdev_nvme_set_options", 01:11:30.750 "params": { 01:11:30.750 "action_on_timeout": "none", 01:11:30.750 "timeout_us": 0, 01:11:30.750 "timeout_admin_us": 0, 01:11:30.750 "keep_alive_timeout_ms": 10000, 01:11:30.750 "arbitration_burst": 0, 01:11:30.750 "low_priority_weight": 0, 01:11:30.750 "medium_priority_weight": 0, 01:11:30.750 "high_priority_weight": 0, 01:11:30.750 "nvme_adminq_poll_period_us": 10000, 01:11:30.750 "nvme_ioq_poll_period_us": 0, 01:11:30.750 "io_queue_requests": 0, 01:11:30.750 "delay_cmd_submit": true, 01:11:30.750 "transport_retry_count": 4, 01:11:30.750 "bdev_retry_count": 3, 01:11:30.750 "transport_ack_timeout": 0, 01:11:30.750 "ctrlr_loss_timeout_sec": 0, 01:11:30.750 "reconnect_delay_sec": 0, 01:11:30.750 "fast_io_fail_timeout_sec": 0, 01:11:30.751 "disable_auto_failback": false, 01:11:30.751 "generate_uuids": false, 01:11:30.751 "transport_tos": 0, 01:11:30.751 "nvme_error_stat": false, 01:11:30.751 "rdma_srq_size": 0, 01:11:30.751 "io_path_stat": false, 01:11:30.751 "allow_accel_sequence": false, 01:11:30.751 "rdma_max_cq_size": 0, 01:11:30.751 "rdma_cm_event_timeout_ms": 0, 01:11:30.751 "dhchap_digests": [ 01:11:30.751 "sha256", 01:11:30.751 "sha384", 01:11:30.751 "sha512" 01:11:30.751 ], 01:11:30.751 "dhchap_dhgroups": [ 
01:11:30.751 "null", 01:11:30.751 "ffdhe2048", 01:11:30.751 "ffdhe3072", 01:11:30.751 "ffdhe4096", 01:11:30.751 "ffdhe6144", 01:11:30.751 "ffdhe8192" 01:11:30.751 ] 01:11:30.751 } 01:11:30.751 }, 01:11:30.751 { 01:11:30.751 "method": "bdev_nvme_set_hotplug", 01:11:30.751 "params": { 01:11:30.751 "period_us": 100000, 01:11:30.751 "enable": false 01:11:30.751 } 01:11:30.751 }, 01:11:30.751 { 01:11:30.751 "method": "bdev_malloc_create", 01:11:30.751 "params": { 01:11:30.751 "name": "malloc0", 01:11:30.751 "num_blocks": 8192, 01:11:30.751 "block_size": 4096, 01:11:30.751 "physical_block_size": 4096, 01:11:30.751 "uuid": "aa9d0b4e-cf57-4c0d-bbf5-28cf9647d62d", 01:11:30.751 "optimal_io_boundary": 0 01:11:30.751 } 01:11:30.751 }, 01:11:30.751 { 01:11:30.751 "method": "bdev_wait_for_examine" 01:11:30.751 } 01:11:30.751 ] 01:11:30.751 }, 01:11:30.751 { 01:11:30.751 "subsystem": "nbd", 01:11:30.751 "config": [] 01:11:30.751 }, 01:11:30.751 { 01:11:30.751 "subsystem": "scheduler", 01:11:30.751 "config": [ 01:11:30.751 { 01:11:30.751 "method": "framework_set_scheduler", 01:11:30.751 "params": { 01:11:30.751 "name": "static" 01:11:30.751 } 01:11:30.751 } 01:11:30.751 ] 01:11:30.751 }, 01:11:30.751 { 01:11:30.751 "subsystem": "nvmf", 01:11:30.751 "config": [ 01:11:30.751 { 01:11:30.751 "method": "nvmf_set_config", 01:11:30.751 "params": { 01:11:30.751 "discovery_filter": "match_any", 01:11:30.751 "admin_cmd_passthru": { 01:11:30.751 "identify_ctrlr": false 01:11:30.751 } 01:11:30.751 } 01:11:30.751 }, 01:11:30.751 { 01:11:30.751 "method": "nvmf_set_max_subsystems", 01:11:30.751 "params": { 01:11:30.751 "max_subsystems": 1024 01:11:30.751 } 01:11:30.751 }, 01:11:30.751 { 01:11:30.751 "method": "nvmf_set_crdt", 01:11:30.751 "params": { 01:11:30.751 "crdt1": 0, 01:11:30.751 "crdt2": 0, 01:11:30.751 "crdt3": 0 01:11:30.751 } 01:11:30.751 }, 01:11:30.751 { 01:11:30.751 "method": "nvmf_create_transport", 01:11:30.751 "params": { 01:11:30.751 "trtype": "TCP", 01:11:30.751 "max_queue_depth": 128, 01:11:30.751 "max_io_qpairs_per_ctrlr": 127, 01:11:30.751 "in_capsule_data_size": 4096, 01:11:30.751 "max_io_size": 131072, 01:11:30.751 "io_unit_size": 131072, 01:11:30.751 "max_aq_depth": 128, 01:11:30.751 "num_shared_buffers": 511, 01:11:30.751 "buf_cache_size": 4294967295, 01:11:30.751 "dif_insert_or_strip": false, 01:11:30.751 "zcopy": false, 01:11:30.751 "c2h_success": false, 01:11:30.751 "sock_priority": 0, 01:11:30.751 "abort_timeout_sec": 1, 01:11:30.751 "ack_timeout": 0, 01:11:30.751 "data_wr_pool_size": 0 01:11:30.751 } 01:11:30.751 }, 01:11:30.751 { 01:11:30.751 "method": "nvmf_create_subsystem", 01:11:30.751 "params": { 01:11:30.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:30.751 "allow_any_host": false, 01:11:30.751 "serial_number": "SPDK00000000000001", 01:11:30.751 "model_number": "SPDK bdev Controller", 01:11:30.751 "max_namespaces": 10, 01:11:30.751 "min_cntlid": 1, 01:11:30.751 "max_cntlid": 65519, 01:11:30.751 "ana_reporting": false 01:11:30.751 } 01:11:30.751 }, 01:11:30.751 { 01:11:30.751 "method": "nvmf_subsystem_add_host", 01:11:30.751 "params": { 01:11:30.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:30.751 "host": "nqn.2016-06.io.spdk:host1", 01:11:30.751 "psk": "/tmp/tmp.srjxyziNbl" 01:11:30.751 } 01:11:30.751 }, 01:11:30.751 { 01:11:30.751 "method": "nvmf_subsystem_add_ns", 01:11:30.751 "params": { 01:11:30.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:30.751 "namespace": { 01:11:30.751 "nsid": 1, 01:11:30.751 "bdev_name": "malloc0", 01:11:30.751 "nguid": 
"AA9D0B4ECF574C0DBBF528CF9647D62D", 01:11:30.751 "uuid": "aa9d0b4e-cf57-4c0d-bbf5-28cf9647d62d", 01:11:30.751 "no_auto_visible": false 01:11:30.751 } 01:11:30.751 } 01:11:30.751 }, 01:11:30.751 { 01:11:30.751 "method": "nvmf_subsystem_add_listener", 01:11:30.751 "params": { 01:11:30.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:30.751 "listen_address": { 01:11:30.751 "trtype": "TCP", 01:11:30.751 "adrfam": "IPv4", 01:11:30.751 "traddr": "10.0.0.2", 01:11:30.751 "trsvcid": "4420" 01:11:30.751 }, 01:11:30.751 "secure_channel": true 01:11:30.751 } 01:11:30.751 } 01:11:30.751 ] 01:11:30.751 } 01:11:30.751 ] 01:11:30.751 }' 01:11:30.751 11:08:35 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 01:11:31.010 11:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 01:11:31.010 "subsystems": [ 01:11:31.010 { 01:11:31.010 "subsystem": "keyring", 01:11:31.010 "config": [] 01:11:31.010 }, 01:11:31.010 { 01:11:31.010 "subsystem": "iobuf", 01:11:31.010 "config": [ 01:11:31.010 { 01:11:31.010 "method": "iobuf_set_options", 01:11:31.010 "params": { 01:11:31.010 "small_pool_count": 8192, 01:11:31.010 "large_pool_count": 1024, 01:11:31.010 "small_bufsize": 8192, 01:11:31.010 "large_bufsize": 135168 01:11:31.010 } 01:11:31.010 } 01:11:31.010 ] 01:11:31.010 }, 01:11:31.010 { 01:11:31.010 "subsystem": "sock", 01:11:31.010 "config": [ 01:11:31.010 { 01:11:31.010 "method": "sock_set_default_impl", 01:11:31.010 "params": { 01:11:31.010 "impl_name": "uring" 01:11:31.010 } 01:11:31.010 }, 01:11:31.010 { 01:11:31.010 "method": "sock_impl_set_options", 01:11:31.010 "params": { 01:11:31.010 "impl_name": "ssl", 01:11:31.010 "recv_buf_size": 4096, 01:11:31.010 "send_buf_size": 4096, 01:11:31.010 "enable_recv_pipe": true, 01:11:31.010 "enable_quickack": false, 01:11:31.010 "enable_placement_id": 0, 01:11:31.010 "enable_zerocopy_send_server": true, 01:11:31.010 "enable_zerocopy_send_client": false, 01:11:31.010 "zerocopy_threshold": 0, 01:11:31.010 "tls_version": 0, 01:11:31.010 "enable_ktls": false 01:11:31.010 } 01:11:31.010 }, 01:11:31.010 { 01:11:31.010 "method": "sock_impl_set_options", 01:11:31.010 "params": { 01:11:31.010 "impl_name": "posix", 01:11:31.010 "recv_buf_size": 2097152, 01:11:31.010 "send_buf_size": 2097152, 01:11:31.010 "enable_recv_pipe": true, 01:11:31.010 "enable_quickack": false, 01:11:31.010 "enable_placement_id": 0, 01:11:31.010 "enable_zerocopy_send_server": true, 01:11:31.010 "enable_zerocopy_send_client": false, 01:11:31.010 "zerocopy_threshold": 0, 01:11:31.010 "tls_version": 0, 01:11:31.010 "enable_ktls": false 01:11:31.010 } 01:11:31.010 }, 01:11:31.010 { 01:11:31.010 "method": "sock_impl_set_options", 01:11:31.010 "params": { 01:11:31.010 "impl_name": "uring", 01:11:31.010 "recv_buf_size": 2097152, 01:11:31.010 "send_buf_size": 2097152, 01:11:31.010 "enable_recv_pipe": true, 01:11:31.010 "enable_quickack": false, 01:11:31.010 "enable_placement_id": 0, 01:11:31.010 "enable_zerocopy_send_server": false, 01:11:31.010 "enable_zerocopy_send_client": false, 01:11:31.010 "zerocopy_threshold": 0, 01:11:31.010 "tls_version": 0, 01:11:31.010 "enable_ktls": false 01:11:31.010 } 01:11:31.010 } 01:11:31.010 ] 01:11:31.010 }, 01:11:31.010 { 01:11:31.010 "subsystem": "vmd", 01:11:31.010 "config": [] 01:11:31.010 }, 01:11:31.010 { 01:11:31.010 "subsystem": "accel", 01:11:31.010 "config": [ 01:11:31.010 { 01:11:31.010 "method": "accel_set_options", 01:11:31.010 "params": { 01:11:31.010 "small_cache_size": 128, 
01:11:31.010 "large_cache_size": 16, 01:11:31.010 "task_count": 2048, 01:11:31.010 "sequence_count": 2048, 01:11:31.010 "buf_count": 2048 01:11:31.010 } 01:11:31.010 } 01:11:31.010 ] 01:11:31.010 }, 01:11:31.010 { 01:11:31.010 "subsystem": "bdev", 01:11:31.010 "config": [ 01:11:31.010 { 01:11:31.010 "method": "bdev_set_options", 01:11:31.010 "params": { 01:11:31.010 "bdev_io_pool_size": 65535, 01:11:31.010 "bdev_io_cache_size": 256, 01:11:31.010 "bdev_auto_examine": true, 01:11:31.010 "iobuf_small_cache_size": 128, 01:11:31.010 "iobuf_large_cache_size": 16 01:11:31.010 } 01:11:31.010 }, 01:11:31.010 { 01:11:31.010 "method": "bdev_raid_set_options", 01:11:31.010 "params": { 01:11:31.010 "process_window_size_kb": 1024, 01:11:31.010 "process_max_bandwidth_mb_sec": 0 01:11:31.010 } 01:11:31.010 }, 01:11:31.010 { 01:11:31.010 "method": "bdev_iscsi_set_options", 01:11:31.010 "params": { 01:11:31.010 "timeout_sec": 30 01:11:31.010 } 01:11:31.010 }, 01:11:31.010 { 01:11:31.010 "method": "bdev_nvme_set_options", 01:11:31.010 "params": { 01:11:31.010 "action_on_timeout": "none", 01:11:31.010 "timeout_us": 0, 01:11:31.010 "timeout_admin_us": 0, 01:11:31.010 "keep_alive_timeout_ms": 10000, 01:11:31.010 "arbitration_burst": 0, 01:11:31.010 "low_priority_weight": 0, 01:11:31.010 "medium_priority_weight": 0, 01:11:31.010 "high_priority_weight": 0, 01:11:31.010 "nvme_adminq_poll_period_us": 10000, 01:11:31.010 "nvme_ioq_poll_period_us": 0, 01:11:31.010 "io_queue_requests": 512, 01:11:31.010 "delay_cmd_submit": true, 01:11:31.010 "transport_retry_count": 4, 01:11:31.010 "bdev_retry_count": 3, 01:11:31.010 "transport_ack_timeout": 0, 01:11:31.010 "ctrlr_loss_timeout_sec": 0, 01:11:31.010 "reconnect_delay_sec": 0, 01:11:31.010 "fast_io_fail_timeout_sec": 0, 01:11:31.010 "disable_auto_failback": false, 01:11:31.010 "generate_uuids": false, 01:11:31.010 "transport_tos": 0, 01:11:31.010 "nvme_error_stat": false, 01:11:31.010 "rdma_srq_size": 0, 01:11:31.010 "io_path_stat": false, 01:11:31.010 "allow_accel_sequence": false, 01:11:31.010 "rdma_max_cq_size": 0, 01:11:31.010 "rdma_cm_event_timeout_ms": 0, 01:11:31.010 "dhchap_digests": [ 01:11:31.010 "sha256", 01:11:31.010 "sha384", 01:11:31.010 "sha512" 01:11:31.010 ], 01:11:31.010 "dhchap_dhgroups": [ 01:11:31.010 "null", 01:11:31.011 "ffdhe2048", 01:11:31.011 "ffdhe3072", 01:11:31.011 "ffdhe4096", 01:11:31.011 "ffdhe6144", 01:11:31.011 "ffdhe8192" 01:11:31.011 ] 01:11:31.011 } 01:11:31.011 }, 01:11:31.011 { 01:11:31.011 "method": "bdev_nvme_attach_controller", 01:11:31.011 "params": { 01:11:31.011 "name": "TLSTEST", 01:11:31.011 "trtype": "TCP", 01:11:31.011 "adrfam": "IPv4", 01:11:31.011 "traddr": "10.0.0.2", 01:11:31.011 "trsvcid": "4420", 01:11:31.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:31.011 "prchk_reftag": false, 01:11:31.011 "prchk_guard": false, 01:11:31.011 "ctrlr_loss_timeout_sec": 0, 01:11:31.011 "reconnect_delay_sec": 0, 01:11:31.011 "fast_io_fail_timeout_sec": 0, 01:11:31.011 "psk": "/tmp/tmp.srjxyziNbl", 01:11:31.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:31.011 "hdgst": false, 01:11:31.011 "ddgst": false 01:11:31.011 } 01:11:31.011 }, 01:11:31.011 { 01:11:31.011 "method": "bdev_nvme_set_hotplug", 01:11:31.011 "params": { 01:11:31.011 "period_us": 100000, 01:11:31.011 "enable": false 01:11:31.011 } 01:11:31.011 }, 01:11:31.011 { 01:11:31.011 "method": "bdev_wait_for_examine" 01:11:31.011 } 01:11:31.011 ] 01:11:31.011 }, 01:11:31.011 { 01:11:31.011 "subsystem": "nbd", 01:11:31.011 "config": [] 01:11:31.011 } 01:11:31.011 ] 
01:11:31.011 }' 01:11:31.011 11:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 85677 01:11:31.011 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85677 ']' 01:11:31.011 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85677 01:11:31.011 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:31.011 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:31.011 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85677 01:11:31.011 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:11:31.011 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:11:31.011 killing process with pid 85677 01:11:31.011 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85677' 01:11:31.011 Received shutdown signal, test time was about 10.000000 seconds 01:11:31.011 01:11:31.011 Latency(us) 01:11:31.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:31.011 =================================================================================================================== 01:11:31.011 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:11:31.011 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85677 01:11:31.011 [2024-07-22 11:08:36.187958] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:11:31.011 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85677 01:11:31.270 11:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 85621 01:11:31.270 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85621 ']' 01:11:31.270 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85621 01:11:31.270 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:31.270 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:31.270 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85621 01:11:31.270 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:11:31.270 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:11:31.270 killing process with pid 85621 01:11:31.270 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85621' 01:11:31.270 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85621 01:11:31.270 [2024-07-22 11:08:36.406140] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:11:31.270 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85621 01:11:31.529 11:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 01:11:31.529 11:08:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:11:31.529 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:11:31.529 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:31.529 11:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 01:11:31.529 "subsystems": [ 01:11:31.529 { 01:11:31.529 "subsystem": "keyring", 01:11:31.529 "config": [] 01:11:31.529 
}, 01:11:31.529 { 01:11:31.529 "subsystem": "iobuf", 01:11:31.529 "config": [ 01:11:31.529 { 01:11:31.529 "method": "iobuf_set_options", 01:11:31.529 "params": { 01:11:31.529 "small_pool_count": 8192, 01:11:31.529 "large_pool_count": 1024, 01:11:31.529 "small_bufsize": 8192, 01:11:31.529 "large_bufsize": 135168 01:11:31.529 } 01:11:31.529 } 01:11:31.529 ] 01:11:31.529 }, 01:11:31.529 { 01:11:31.529 "subsystem": "sock", 01:11:31.529 "config": [ 01:11:31.529 { 01:11:31.529 "method": "sock_set_default_impl", 01:11:31.529 "params": { 01:11:31.529 "impl_name": "uring" 01:11:31.529 } 01:11:31.529 }, 01:11:31.529 { 01:11:31.529 "method": "sock_impl_set_options", 01:11:31.529 "params": { 01:11:31.529 "impl_name": "ssl", 01:11:31.529 "recv_buf_size": 4096, 01:11:31.529 "send_buf_size": 4096, 01:11:31.529 "enable_recv_pipe": true, 01:11:31.529 "enable_quickack": false, 01:11:31.529 "enable_placement_id": 0, 01:11:31.529 "enable_zerocopy_send_server": true, 01:11:31.529 "enable_zerocopy_send_client": false, 01:11:31.529 "zerocopy_threshold": 0, 01:11:31.529 "tls_version": 0, 01:11:31.529 "enable_ktls": false 01:11:31.529 } 01:11:31.529 }, 01:11:31.529 { 01:11:31.529 "method": "sock_impl_set_options", 01:11:31.529 "params": { 01:11:31.529 "impl_name": "posix", 01:11:31.529 "recv_buf_size": 2097152, 01:11:31.529 "send_buf_size": 2097152, 01:11:31.529 "enable_recv_pipe": true, 01:11:31.529 "enable_quickack": false, 01:11:31.529 "enable_placement_id": 0, 01:11:31.529 "enable_zerocopy_send_server": true, 01:11:31.529 "enable_zerocopy_send_client": false, 01:11:31.529 "zerocopy_threshold": 0, 01:11:31.529 "tls_version": 0, 01:11:31.529 "enable_ktls": false 01:11:31.529 } 01:11:31.529 }, 01:11:31.529 { 01:11:31.529 "method": "sock_impl_set_options", 01:11:31.529 "params": { 01:11:31.529 "impl_name": "uring", 01:11:31.529 "recv_buf_size": 2097152, 01:11:31.529 "send_buf_size": 2097152, 01:11:31.529 "enable_recv_pipe": true, 01:11:31.529 "enable_quickack": false, 01:11:31.529 "enable_placement_id": 0, 01:11:31.529 "enable_zerocopy_send_server": false, 01:11:31.529 "enable_zerocopy_send_client": false, 01:11:31.529 "zerocopy_threshold": 0, 01:11:31.529 "tls_version": 0, 01:11:31.529 "enable_ktls": false 01:11:31.529 } 01:11:31.529 } 01:11:31.529 ] 01:11:31.529 }, 01:11:31.529 { 01:11:31.529 "subsystem": "vmd", 01:11:31.529 "config": [] 01:11:31.529 }, 01:11:31.529 { 01:11:31.529 "subsystem": "accel", 01:11:31.529 "config": [ 01:11:31.529 { 01:11:31.529 "method": "accel_set_options", 01:11:31.529 "params": { 01:11:31.529 "small_cache_size": 128, 01:11:31.529 "large_cache_size": 16, 01:11:31.529 "task_count": 2048, 01:11:31.529 "sequence_count": 2048, 01:11:31.529 "buf_count": 2048 01:11:31.529 } 01:11:31.529 } 01:11:31.529 ] 01:11:31.529 }, 01:11:31.529 { 01:11:31.529 "subsystem": "bdev", 01:11:31.529 "config": [ 01:11:31.529 { 01:11:31.529 "method": "bdev_set_options", 01:11:31.529 "params": { 01:11:31.529 "bdev_io_pool_size": 65535, 01:11:31.529 "bdev_io_cache_size": 256, 01:11:31.529 "bdev_auto_examine": true, 01:11:31.529 "iobuf_small_cache_size": 128, 01:11:31.529 "iobuf_large_cache_size": 16 01:11:31.529 } 01:11:31.529 }, 01:11:31.529 { 01:11:31.529 "method": "bdev_raid_set_options", 01:11:31.529 "params": { 01:11:31.529 "process_window_size_kb": 1024, 01:11:31.529 "process_max_bandwidth_mb_sec": 0 01:11:31.529 } 01:11:31.529 }, 01:11:31.529 { 01:11:31.529 "method": "bdev_iscsi_set_options", 01:11:31.529 "params": { 01:11:31.529 "timeout_sec": 30 01:11:31.529 } 01:11:31.529 }, 01:11:31.529 { 01:11:31.529 
"method": "bdev_nvme_set_options", 01:11:31.529 "params": { 01:11:31.529 "action_on_timeout": "none", 01:11:31.529 "timeout_us": 0, 01:11:31.529 "timeout_admin_us": 0, 01:11:31.529 "keep_alive_timeout_ms": 10000, 01:11:31.529 "arbitration_burst": 0, 01:11:31.529 "low_priority_weight": 0, 01:11:31.529 "medium_priority_weight": 0, 01:11:31.529 "high_priority_weight": 0, 01:11:31.529 "nvme_adminq_poll_period_us": 10000, 01:11:31.529 "nvme_ioq_poll_period_us": 0, 01:11:31.529 "io_queue_requests": 0, 01:11:31.529 "delay_cmd_submit": true, 01:11:31.529 "transport_retry_count": 4, 01:11:31.529 "bdev_retry_count": 3, 01:11:31.529 "transport_ack_timeout": 0, 01:11:31.529 "ctrlr_loss_timeout_sec": 0, 01:11:31.529 "reconnect_delay_sec": 0, 01:11:31.529 "fast_io_fail_timeout_sec": 0, 01:11:31.529 "disable_auto_failback": false, 01:11:31.529 "generate_uuids": false, 01:11:31.529 "transport_tos": 0, 01:11:31.529 "nvme_error_stat": false, 01:11:31.529 "rdma_srq_size": 0, 01:11:31.529 "io_path_stat": false, 01:11:31.529 "allow_accel_sequence": false, 01:11:31.529 "rdma_max_cq_size": 0, 01:11:31.529 "rdma_cm_event_timeout_ms": 0, 01:11:31.529 "dhchap_digests": [ 01:11:31.529 "sha256", 01:11:31.529 "sha384", 01:11:31.529 "sha512" 01:11:31.529 ], 01:11:31.529 "dhchap_dhgroups": [ 01:11:31.529 "null", 01:11:31.529 "ffdhe2048", 01:11:31.529 "ffdhe3072", 01:11:31.529 "ffdhe4096", 01:11:31.529 "ffdhe6144", 01:11:31.529 "ffdhe8192" 01:11:31.529 ] 01:11:31.529 } 01:11:31.529 }, 01:11:31.529 { 01:11:31.529 "method": "bdev_nvme_set_hotplug", 01:11:31.529 "params": { 01:11:31.529 "period_us": 100000, 01:11:31.529 "enable": false 01:11:31.529 } 01:11:31.529 }, 01:11:31.529 { 01:11:31.529 "method": "bdev_malloc_create", 01:11:31.529 "params": { 01:11:31.529 "name": "malloc0", 01:11:31.529 "num_blocks": 8192, 01:11:31.529 "block_size": 4096, 01:11:31.529 "physical_block_size": 4096, 01:11:31.529 "uuid": "aa9d0b4e-cf57-4c0d-bbf5-28cf9647d62d", 01:11:31.529 "optimal_io_boundary": 0 01:11:31.529 } 01:11:31.529 }, 01:11:31.529 { 01:11:31.529 "method": "bdev_wait_for_examine" 01:11:31.529 } 01:11:31.529 ] 01:11:31.529 }, 01:11:31.529 { 01:11:31.530 "subsystem": "nbd", 01:11:31.530 "config": [] 01:11:31.530 }, 01:11:31.530 { 01:11:31.530 "subsystem": "scheduler", 01:11:31.530 "config": [ 01:11:31.530 { 01:11:31.530 "method": "framework_set_scheduler", 01:11:31.530 "params": { 01:11:31.530 "name": "static" 01:11:31.530 } 01:11:31.530 } 01:11:31.530 ] 01:11:31.530 }, 01:11:31.530 { 01:11:31.530 "subsystem": "nvmf", 01:11:31.530 "config": [ 01:11:31.530 { 01:11:31.530 "method": "nvmf_set_config", 01:11:31.530 "params": { 01:11:31.530 "discovery_filter": "match_any", 01:11:31.530 "admin_cmd_passthru": { 01:11:31.530 "identify_ctrlr": false 01:11:31.530 } 01:11:31.530 } 01:11:31.530 }, 01:11:31.530 { 01:11:31.530 "method": "nvmf_set_max_subsystems", 01:11:31.530 "params": { 01:11:31.530 "max_subsystems": 1024 01:11:31.530 } 01:11:31.530 }, 01:11:31.530 { 01:11:31.530 "method": "nvmf_set_crdt", 01:11:31.530 "params": { 01:11:31.530 "crdt1": 0, 01:11:31.530 "crdt2": 0, 01:11:31.530 "crdt3": 0 01:11:31.530 } 01:11:31.530 }, 01:11:31.530 { 01:11:31.530 "method": "nvmf_create_transport", 01:11:31.530 "params": { 01:11:31.530 "trtype": "TCP", 01:11:31.530 "max_queue_depth": 128, 01:11:31.530 "max_io_qpairs_per_ctrlr": 127, 01:11:31.530 "in_capsule_data_size": 4096, 01:11:31.530 "max_io_size": 131072, 01:11:31.530 "io_unit_size": 131072, 01:11:31.530 "max_aq_depth": 128, 01:11:31.530 "num_shared_buffers": 511, 01:11:31.530 
"buf_cache_size": 4294967295, 01:11:31.530 "dif_insert_or_strip": false, 01:11:31.530 "zcopy": false, 01:11:31.530 "c2h_success": false, 01:11:31.530 "sock_priority": 0, 01:11:31.530 "abort_timeout_sec": 1, 01:11:31.530 "ack_timeout": 0, 01:11:31.530 "data_wr_pool_size": 0 01:11:31.530 } 01:11:31.530 }, 01:11:31.530 { 01:11:31.530 "method": "nvmf_create_subsystem", 01:11:31.530 "params": { 01:11:31.530 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:31.530 "allow_any_host": false, 01:11:31.530 "serial_number": "SPDK00000000000001", 01:11:31.530 "model_number": "SPDK bdev Controller", 01:11:31.530 "max_namespaces": 10, 01:11:31.530 "min_cntlid": 1, 01:11:31.530 "max_cntlid": 65519, 01:11:31.530 "ana_reporting": false 01:11:31.530 } 01:11:31.530 }, 01:11:31.530 { 01:11:31.530 "method": "nvmf_subsystem_add_host", 01:11:31.530 "params": { 01:11:31.530 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:31.530 "host": "nqn.2016-06.io.spdk:host1", 01:11:31.530 "psk": "/tmp/tmp.srjxyziNbl" 01:11:31.530 } 01:11:31.530 }, 01:11:31.530 { 01:11:31.530 "method": "nvmf_subsystem_add_ns", 01:11:31.530 "params": { 01:11:31.530 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:31.530 "namespace": { 01:11:31.530 "nsid": 1, 01:11:31.530 "bdev_name": "malloc0", 01:11:31.530 "nguid": "AA9D0B4ECF574C0DBBF528CF9647D62D", 01:11:31.530 "uuid": "aa9d0b4e-cf57-4c0d-bbf5-28cf9647d62d", 01:11:31.530 "no_auto_visible": false 01:11:31.530 } 01:11:31.530 } 01:11:31.530 }, 01:11:31.530 { 01:11:31.530 "method": "nvmf_subsystem_add_listener", 01:11:31.530 "params": { 01:11:31.530 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:31.530 "listen_address": { 01:11:31.530 "trtype": "TCP", 01:11:31.530 "adrfam": "IPv4", 01:11:31.530 "traddr": "10.0.0.2", 01:11:31.530 "trsvcid": "4420" 01:11:31.530 }, 01:11:31.530 "secure_channel": true 01:11:31.530 } 01:11:31.530 } 01:11:31.530 ] 01:11:31.530 } 01:11:31.530 ] 01:11:31.530 }' 01:11:31.530 11:08:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 01:11:31.530 11:08:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85720 01:11:31.530 11:08:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85720 01:11:31.530 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85720 ']' 01:11:31.530 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:31.530 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:31.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:31.530 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:31.530 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:31.530 11:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:31.530 [2024-07-22 11:08:36.644637] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:11:31.530 [2024-07-22 11:08:36.644698] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:31.789 [2024-07-22 11:08:36.787520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:31.789 [2024-07-22 11:08:36.831615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:31.789 [2024-07-22 11:08:36.831659] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:31.789 [2024-07-22 11:08:36.831669] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:31.789 [2024-07-22 11:08:36.831677] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:31.789 [2024-07-22 11:08:36.831684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:11:31.789 [2024-07-22 11:08:36.831758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:11:31.789 [2024-07-22 11:08:36.986893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:32.048 [2024-07-22 11:08:37.041253] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:32.048 [2024-07-22 11:08:37.057159] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:11:32.048 [2024-07-22 11:08:37.073146] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:11:32.048 [2024-07-22 11:08:37.073312] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:32.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=85752 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 85752 /var/tmp/bdevperf.sock 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85752 ']' 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
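The final bdevperf instance (pid 85752) is launched the same way: target/tls.sh@204, echoed just below, passes -c /dev/fd/63 so that the bdev_nvme_attach_controller entry, including the psk path, is part of bdevperf's startup JSON rather than a later RPC. A hedged sketch of that launch, with bdevperfconf holding the JSON captured earlier via save_config on the bdevperf socket and the process substitution standing in for /dev/fd/63:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
      -c <(echo "$bdevperfconf") &
  bdevperf_pid=$!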
01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:32.617 11:08:37 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 01:11:32.617 "subsystems": [ 01:11:32.617 { 01:11:32.617 "subsystem": "keyring", 01:11:32.617 "config": [] 01:11:32.617 }, 01:11:32.617 { 01:11:32.617 "subsystem": "iobuf", 01:11:32.617 "config": [ 01:11:32.617 { 01:11:32.617 "method": "iobuf_set_options", 01:11:32.617 "params": { 01:11:32.617 "small_pool_count": 8192, 01:11:32.617 "large_pool_count": 1024, 01:11:32.617 "small_bufsize": 8192, 01:11:32.617 "large_bufsize": 135168 01:11:32.617 } 01:11:32.617 } 01:11:32.617 ] 01:11:32.617 }, 01:11:32.617 { 01:11:32.617 "subsystem": "sock", 01:11:32.617 "config": [ 01:11:32.617 { 01:11:32.617 "method": "sock_set_default_impl", 01:11:32.617 "params": { 01:11:32.617 "impl_name": "uring" 01:11:32.617 } 01:11:32.617 }, 01:11:32.617 { 01:11:32.617 "method": "sock_impl_set_options", 01:11:32.617 "params": { 01:11:32.617 "impl_name": "ssl", 01:11:32.617 "recv_buf_size": 4096, 01:11:32.617 "send_buf_size": 4096, 01:11:32.617 "enable_recv_pipe": true, 01:11:32.617 "enable_quickack": false, 01:11:32.617 "enable_placement_id": 0, 01:11:32.617 "enable_zerocopy_send_server": true, 01:11:32.617 "enable_zerocopy_send_client": false, 01:11:32.617 "zerocopy_threshold": 0, 01:11:32.617 "tls_version": 0, 01:11:32.617 "enable_ktls": false 01:11:32.617 } 01:11:32.617 }, 01:11:32.617 { 01:11:32.617 "method": "sock_impl_set_options", 01:11:32.617 "params": { 01:11:32.617 "impl_name": "posix", 01:11:32.617 "recv_buf_size": 2097152, 01:11:32.617 "send_buf_size": 2097152, 01:11:32.617 "enable_recv_pipe": true, 01:11:32.617 "enable_quickack": false, 01:11:32.617 "enable_placement_id": 0, 01:11:32.617 "enable_zerocopy_send_server": true, 01:11:32.617 "enable_zerocopy_send_client": false, 01:11:32.617 "zerocopy_threshold": 0, 01:11:32.617 "tls_version": 0, 01:11:32.617 "enable_ktls": false 01:11:32.617 } 01:11:32.617 }, 01:11:32.617 { 01:11:32.617 "method": "sock_impl_set_options", 01:11:32.617 "params": { 01:11:32.617 "impl_name": "uring", 01:11:32.617 "recv_buf_size": 2097152, 01:11:32.617 "send_buf_size": 2097152, 01:11:32.617 "enable_recv_pipe": true, 01:11:32.617 "enable_quickack": false, 01:11:32.617 "enable_placement_id": 0, 01:11:32.617 "enable_zerocopy_send_server": false, 01:11:32.617 "enable_zerocopy_send_client": false, 01:11:32.617 "zerocopy_threshold": 0, 01:11:32.617 "tls_version": 0, 01:11:32.617 "enable_ktls": false 01:11:32.617 } 01:11:32.617 } 01:11:32.617 ] 01:11:32.617 }, 01:11:32.617 { 01:11:32.617 "subsystem": "vmd", 01:11:32.617 "config": [] 01:11:32.617 }, 01:11:32.617 { 01:11:32.617 "subsystem": "accel", 01:11:32.617 "config": [ 01:11:32.617 { 01:11:32.617 "method": "accel_set_options", 01:11:32.617 "params": { 01:11:32.617 "small_cache_size": 128, 01:11:32.617 "large_cache_size": 16, 01:11:32.617 "task_count": 2048, 01:11:32.617 "sequence_count": 2048, 01:11:32.617 "buf_count": 2048 01:11:32.617 } 01:11:32.617 } 01:11:32.617 ] 01:11:32.617 }, 01:11:32.617 { 01:11:32.617 "subsystem": "bdev", 01:11:32.617 "config": [ 01:11:32.617 { 01:11:32.617 "method": "bdev_set_options", 01:11:32.617 "params": { 01:11:32.617 "bdev_io_pool_size": 65535, 01:11:32.617 
"bdev_io_cache_size": 256, 01:11:32.617 "bdev_auto_examine": true, 01:11:32.617 "iobuf_small_cache_size": 128, 01:11:32.617 "iobuf_large_cache_size": 16 01:11:32.617 } 01:11:32.617 }, 01:11:32.617 { 01:11:32.617 "method": "bdev_raid_set_options", 01:11:32.617 "params": { 01:11:32.617 "process_window_size_kb": 1024, 01:11:32.617 "process_max_bandwidth_mb_sec": 0 01:11:32.617 } 01:11:32.617 }, 01:11:32.617 { 01:11:32.617 "method": "bdev_iscsi_set_options", 01:11:32.617 "params": { 01:11:32.617 "timeout_sec": 30 01:11:32.617 } 01:11:32.617 }, 01:11:32.617 { 01:11:32.617 "method": "bdev_nvme_set_options", 01:11:32.617 "params": { 01:11:32.617 "action_on_timeout": "none", 01:11:32.618 "timeout_us": 0, 01:11:32.618 "timeout_admin_us": 0, 01:11:32.618 "keep_alive_timeout_ms": 10000, 01:11:32.618 "arbitration_burst": 0, 01:11:32.618 "low_priority_weight": 0, 01:11:32.618 "medium_priority_weight": 0, 01:11:32.618 "high_priority_weight": 0, 01:11:32.618 "nvme_adminq_poll_period_us": 10000, 01:11:32.618 "nvme_ioq_poll_period_us": 0, 01:11:32.618 "io_queue_requests": 512, 01:11:32.618 "delay_cmd_submit": true, 01:11:32.618 "transport_retry_count": 4, 01:11:32.618 "bdev_retry_count": 3, 01:11:32.618 "transport_ack_timeout": 0, 01:11:32.618 "ctrlr_loss_timeout_sec": 0, 01:11:32.618 "reconnect_delay_sec": 0, 01:11:32.618 "fast_io_fail_timeout_sec": 0, 01:11:32.618 "disable_auto_failback": false, 01:11:32.618 "generate_uuids": false, 01:11:32.618 "transport_tos": 0, 01:11:32.618 "nvme_error_stat": false, 01:11:32.618 "rdma_srq_size": 0, 01:11:32.618 "io_path_stat": false, 01:11:32.618 "allow_accel_sequence": false, 01:11:32.618 "rdma_max_cq_size": 0, 01:11:32.618 "rdma_cm_event_timeout_ms": 0, 01:11:32.618 "dhchap_digests": [ 01:11:32.618 "sha256", 01:11:32.618 "sha384", 01:11:32.618 "sha512" 01:11:32.618 ], 01:11:32.618 "dhchap_dhgroups": [ 01:11:32.618 "null", 01:11:32.618 "ffdhe2048", 01:11:32.618 "ffdhe3072", 01:11:32.618 "ffdhe4096", 01:11:32.618 "ffdhe6144", 01:11:32.618 "ffdhe8192" 01:11:32.618 ] 01:11:32.618 } 01:11:32.618 }, 01:11:32.618 { 01:11:32.618 "method": "bdev_nvme_attach_controller", 01:11:32.618 "params": { 01:11:32.618 "name": "TLSTEST", 01:11:32.618 "trtype": "TCP", 01:11:32.618 "adrfam": "IPv4", 01:11:32.618 "traddr": "10.0.0.2", 01:11:32.618 "trsvcid": "4420", 01:11:32.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:32.618 "prchk_reftag": false, 01:11:32.618 "prchk_guard": false, 01:11:32.618 "ctrlr_loss_timeout_sec": 0, 01:11:32.618 "reconnect_delay_sec": 0, 01:11:32.618 "fast_io_fail_timeout_sec": 0, 01:11:32.618 "psk": "/tmp/tmp.srjxyziNbl", 01:11:32.618 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:32.618 "hdgst": false, 01:11:32.618 "ddgst": false 01:11:32.618 } 01:11:32.618 }, 01:11:32.618 { 01:11:32.618 "method": "bdev_nvme_set_hotplug", 01:11:32.618 "params": { 01:11:32.618 "period_us": 100000, 01:11:32.618 "enable": false 01:11:32.618 } 01:11:32.618 }, 01:11:32.618 { 01:11:32.618 "method": "bdev_wait_for_examine" 01:11:32.618 } 01:11:32.618 ] 01:11:32.618 }, 01:11:32.618 { 01:11:32.618 "subsystem": "nbd", 01:11:32.618 "config": [] 01:11:32.618 } 01:11:32.618 ] 01:11:32.618 }' 01:11:32.618 [2024-07-22 11:08:37.629524] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:11:32.618 [2024-07-22 11:08:37.629690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85752 ] 01:11:32.618 [2024-07-22 11:08:37.774161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:32.618 [2024-07-22 11:08:37.819602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:11:32.877 [2024-07-22 11:08:37.945626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:32.877 [2024-07-22 11:08:37.972571] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:11:32.877 [2024-07-22 11:08:37.972670] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:11:33.442 11:08:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:33.442 11:08:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:33.442 11:08:38 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:11:33.442 Running I/O for 10 seconds... 01:11:43.412 01:11:43.412 Latency(us) 01:11:43.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:43.412 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:11:43.412 Verification LBA range: start 0x0 length 0x2000 01:11:43.412 TLSTESTn1 : 10.01 5753.71 22.48 0.00 0.00 22212.61 4079.55 18950.17 01:11:43.412 =================================================================================================================== 01:11:43.412 Total : 5753.71 22.48 0.00 0.00 22212.61 4079.55 18950.17 01:11:43.412 0 01:11:43.412 11:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:11:43.412 11:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 85752 01:11:43.412 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85752 ']' 01:11:43.412 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85752 01:11:43.412 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:43.412 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:43.412 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85752 01:11:43.412 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:11:43.412 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:11:43.412 killing process with pid 85752 01:11:43.412 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85752' 01:11:43.412 Received shutdown signal, test time was about 10.000000 seconds 01:11:43.412 01:11:43.412 Latency(us) 01:11:43.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:43.412 =================================================================================================================== 01:11:43.412 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:11:43.412 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85752 01:11:43.412 [2024-07-22 11:08:48.615963] app.c:1024:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:11:43.412 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85752 01:11:43.671 11:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 85720 01:11:43.671 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85720 ']' 01:11:43.671 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85720 01:11:43.671 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:43.671 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:43.671 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85720 01:11:43.671 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:11:43.671 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:11:43.671 killing process with pid 85720 01:11:43.671 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85720' 01:11:43.671 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85720 01:11:43.671 [2024-07-22 11:08:48.827212] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:11:43.671 11:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85720 01:11:43.929 11:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 01:11:43.929 11:08:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:11:43.929 11:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:11:43.929 11:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:43.929 11:08:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:11:43.929 11:08:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85885 01:11:43.929 11:08:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85885 01:11:43.929 11:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85885 ']' 01:11:43.929 11:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:43.929 11:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:43.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:43.929 11:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:43.929 11:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:43.929 11:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:43.929 [2024-07-22 11:08:49.063144] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:11:43.929 [2024-07-22 11:08:49.063207] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:44.188 [2024-07-22 11:08:49.197791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:44.188 [2024-07-22 11:08:49.237306] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
01:11:44.188 [2024-07-22 11:08:49.237361] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:44.188 [2024-07-22 11:08:49.237371] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:44.188 [2024-07-22 11:08:49.237378] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:44.188 [2024-07-22 11:08:49.237386] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:11:44.188 [2024-07-22 11:08:49.237412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:11:44.188 [2024-07-22 11:08:49.279769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:44.755 11:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:44.755 11:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:44.755 11:08:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:11:44.755 11:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:11:44.755 11:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:44.755 11:08:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:44.755 11:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.srjxyziNbl 01:11:44.755 11:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.srjxyziNbl 01:11:44.755 11:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:11:45.013 [2024-07-22 11:08:50.139993] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:45.013 11:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:11:45.272 11:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 01:11:45.531 [2024-07-22 11:08:50.507438] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:11:45.531 [2024-07-22 11:08:50.507639] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:11:45.531 11:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:11:45.531 malloc0 01:11:45.531 11:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:11:45.789 11:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.srjxyziNbl 01:11:46.047 [2024-07-22 11:08:51.073415] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:11:46.047 11:08:51 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85934 01:11:46.047 11:08:51 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 01:11:46.047 11:08:51 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
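Recap of the setup_nvmf_tgt sequence traced above — every command and argument is taken verbatim from the trace and only condensed here without the xtrace prefixes, so this is a restatement of what the script already runs, not an independent recipe:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k requests the TLS-capable listener; the "TLS support is considered experimental" notice above comes from this call
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # PSK handed over as a file path; this is the deprecated form flagged by the nvmf_tcp_psk_path warning above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.srjxyziNbl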
01:11:46.047 11:08:51 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85934 /var/tmp/bdevperf.sock 01:11:46.047 11:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85934 ']' 01:11:46.047 11:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:11:46.047 11:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:46.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:11:46.047 11:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:11:46.047 11:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:46.047 11:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:46.047 [2024-07-22 11:08:51.141108] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:11:46.047 [2024-07-22 11:08:51.141174] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85934 ] 01:11:46.307 [2024-07-22 11:08:51.269967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:46.307 [2024-07-22 11:08:51.311225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:11:46.307 [2024-07-22 11:08:51.353068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:46.874 11:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:46.874 11:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:46.874 11:08:51 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.srjxyziNbl 01:11:47.132 11:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:11:47.132 [2024-07-22 11:08:52.328607] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:11:47.391 nvme0n1 01:11:47.391 11:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:11:47.391 Running I/O for 1 seconds... 
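The bdevperf half of this run is driven over /var/tmp/bdevperf.sock; stripped of the xtrace prefixes, the three calls traced above are:
  # register the PSK file under the keyring name key0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.srjxyziNbl
  # attach by key name rather than a raw PSK path (hence no nvme_ctrlr_psk deprecation warning for this run)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # kick off the timed verify workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests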
01:11:48.764 01:11:48.764 Latency(us) 01:11:48.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:48.764 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:11:48.764 Verification LBA range: start 0x0 length 0x2000 01:11:48.764 nvme0n1 : 1.01 5757.95 22.49 0.00 0.00 22061.65 4474.35 17265.71 01:11:48.764 =================================================================================================================== 01:11:48.764 Total : 5757.95 22.49 0.00 0.00 22061.65 4474.35 17265.71 01:11:48.764 0 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 85934 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85934 ']' 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85934 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85934 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:11:48.764 killing process with pid 85934 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85934' 01:11:48.764 Received shutdown signal, test time was about 1.000000 seconds 01:11:48.764 01:11:48.764 Latency(us) 01:11:48.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:48.764 =================================================================================================================== 01:11:48.764 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85934 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85934 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 85885 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85885 ']' 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85885 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85885 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:11:48.764 killing process with pid 85885 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85885' 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85885 01:11:48.764 [2024-07-22 11:08:53.806856] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:11:48.764 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85885 01:11:49.023 11:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 01:11:49.023 11:08:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:11:49.023 11:08:53 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 01:11:49.023 11:08:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:49.023 11:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85985 01:11:49.023 11:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85985 01:11:49.023 11:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:11:49.023 11:08:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85985 ']' 01:11:49.023 11:08:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:49.023 11:08:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:49.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:49.023 11:08:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:49.023 11:08:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:49.023 11:08:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:49.023 [2024-07-22 11:08:54.054652] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:11:49.023 [2024-07-22 11:08:54.054729] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:49.023 [2024-07-22 11:08:54.187395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:49.024 [2024-07-22 11:08:54.228628] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:49.024 [2024-07-22 11:08:54.228681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:49.024 [2024-07-22 11:08:54.228693] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:49.024 [2024-07-22 11:08:54.228702] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:49.024 [2024-07-22 11:08:54.228711] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:11:49.024 [2024-07-22 11:08:54.228739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:11:49.282 [2024-07-22 11:08:54.270244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:49.849 11:08:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:49.849 11:08:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:49.849 11:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:11:49.849 11:08:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:11:49.849 11:08:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:49.849 11:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:49.849 11:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 01:11:49.849 11:08:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:49.849 11:08:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:49.849 [2024-07-22 11:08:54.959190] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:49.849 malloc0 01:11:49.849 [2024-07-22 11:08:54.987923] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:11:49.849 [2024-07-22 11:08:54.988101] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:11:49.849 11:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:49.849 11:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=86017 01:11:49.850 11:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 01:11:49.850 11:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 86017 /var/tmp/bdevperf.sock 01:11:49.850 11:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86017 ']' 01:11:49.850 11:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:11:49.850 11:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:49.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:11:49.850 11:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:11:49.850 11:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:49.850 11:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:50.108 [2024-07-22 11:08:55.066172] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:11:50.108 [2024-07-22 11:08:55.066241] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86017 ] 01:11:50.108 [2024-07-22 11:08:55.207210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:50.108 [2024-07-22 11:08:55.249855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:11:50.108 [2024-07-22 11:08:55.291906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:51.043 11:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:51.043 11:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:51.043 11:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.srjxyziNbl 01:11:51.043 11:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:11:51.300 [2024-07-22 11:08:56.251486] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:11:51.301 nvme0n1 01:11:51.301 11:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:11:51.301 Running I/O for 1 seconds... 01:11:52.245 01:11:52.245 Latency(us) 01:11:52.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:52.245 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:11:52.245 Verification LBA range: start 0x0 length 0x2000 01:11:52.245 nvme0n1 : 1.01 5736.53 22.41 0.00 0.00 22156.99 4526.98 17581.55 01:11:52.245 =================================================================================================================== 01:11:52.246 Total : 5736.53 22.41 0.00 0.00 22156.99 4526.98 17581.55 01:11:52.246 0 01:11:52.505 11:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 01:11:52.505 11:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:52.505 11:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:52.505 11:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:52.505 11:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 01:11:52.505 "subsystems": [ 01:11:52.505 { 01:11:52.505 "subsystem": "keyring", 01:11:52.505 "config": [ 01:11:52.505 { 01:11:52.505 "method": "keyring_file_add_key", 01:11:52.505 "params": { 01:11:52.505 "name": "key0", 01:11:52.505 "path": "/tmp/tmp.srjxyziNbl" 01:11:52.505 } 01:11:52.505 } 01:11:52.505 ] 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "subsystem": "iobuf", 01:11:52.505 "config": [ 01:11:52.505 { 01:11:52.505 "method": "iobuf_set_options", 01:11:52.505 "params": { 01:11:52.505 "small_pool_count": 8192, 01:11:52.505 "large_pool_count": 1024, 01:11:52.505 "small_bufsize": 8192, 01:11:52.505 "large_bufsize": 135168 01:11:52.505 } 01:11:52.505 } 01:11:52.505 ] 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "subsystem": "sock", 01:11:52.505 "config": [ 01:11:52.505 { 01:11:52.505 "method": "sock_set_default_impl", 01:11:52.505 "params": { 01:11:52.505 "impl_name": "uring" 
01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "sock_impl_set_options", 01:11:52.505 "params": { 01:11:52.505 "impl_name": "ssl", 01:11:52.505 "recv_buf_size": 4096, 01:11:52.505 "send_buf_size": 4096, 01:11:52.505 "enable_recv_pipe": true, 01:11:52.505 "enable_quickack": false, 01:11:52.505 "enable_placement_id": 0, 01:11:52.505 "enable_zerocopy_send_server": true, 01:11:52.505 "enable_zerocopy_send_client": false, 01:11:52.505 "zerocopy_threshold": 0, 01:11:52.505 "tls_version": 0, 01:11:52.505 "enable_ktls": false 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "sock_impl_set_options", 01:11:52.505 "params": { 01:11:52.505 "impl_name": "posix", 01:11:52.505 "recv_buf_size": 2097152, 01:11:52.505 "send_buf_size": 2097152, 01:11:52.505 "enable_recv_pipe": true, 01:11:52.505 "enable_quickack": false, 01:11:52.505 "enable_placement_id": 0, 01:11:52.505 "enable_zerocopy_send_server": true, 01:11:52.505 "enable_zerocopy_send_client": false, 01:11:52.505 "zerocopy_threshold": 0, 01:11:52.505 "tls_version": 0, 01:11:52.505 "enable_ktls": false 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "sock_impl_set_options", 01:11:52.505 "params": { 01:11:52.505 "impl_name": "uring", 01:11:52.505 "recv_buf_size": 2097152, 01:11:52.505 "send_buf_size": 2097152, 01:11:52.505 "enable_recv_pipe": true, 01:11:52.505 "enable_quickack": false, 01:11:52.505 "enable_placement_id": 0, 01:11:52.505 "enable_zerocopy_send_server": false, 01:11:52.505 "enable_zerocopy_send_client": false, 01:11:52.505 "zerocopy_threshold": 0, 01:11:52.505 "tls_version": 0, 01:11:52.505 "enable_ktls": false 01:11:52.505 } 01:11:52.505 } 01:11:52.505 ] 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "subsystem": "vmd", 01:11:52.505 "config": [] 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "subsystem": "accel", 01:11:52.505 "config": [ 01:11:52.505 { 01:11:52.505 "method": "accel_set_options", 01:11:52.505 "params": { 01:11:52.505 "small_cache_size": 128, 01:11:52.505 "large_cache_size": 16, 01:11:52.505 "task_count": 2048, 01:11:52.505 "sequence_count": 2048, 01:11:52.505 "buf_count": 2048 01:11:52.505 } 01:11:52.505 } 01:11:52.505 ] 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "subsystem": "bdev", 01:11:52.505 "config": [ 01:11:52.505 { 01:11:52.505 "method": "bdev_set_options", 01:11:52.505 "params": { 01:11:52.505 "bdev_io_pool_size": 65535, 01:11:52.505 "bdev_io_cache_size": 256, 01:11:52.505 "bdev_auto_examine": true, 01:11:52.505 "iobuf_small_cache_size": 128, 01:11:52.505 "iobuf_large_cache_size": 16 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "bdev_raid_set_options", 01:11:52.505 "params": { 01:11:52.505 "process_window_size_kb": 1024, 01:11:52.505 "process_max_bandwidth_mb_sec": 0 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "bdev_iscsi_set_options", 01:11:52.505 "params": { 01:11:52.505 "timeout_sec": 30 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "bdev_nvme_set_options", 01:11:52.505 "params": { 01:11:52.505 "action_on_timeout": "none", 01:11:52.505 "timeout_us": 0, 01:11:52.505 "timeout_admin_us": 0, 01:11:52.505 "keep_alive_timeout_ms": 10000, 01:11:52.505 "arbitration_burst": 0, 01:11:52.505 "low_priority_weight": 0, 01:11:52.505 "medium_priority_weight": 0, 01:11:52.505 "high_priority_weight": 0, 01:11:52.505 "nvme_adminq_poll_period_us": 10000, 01:11:52.505 "nvme_ioq_poll_period_us": 0, 01:11:52.505 "io_queue_requests": 0, 01:11:52.505 "delay_cmd_submit": true, 01:11:52.505 
"transport_retry_count": 4, 01:11:52.505 "bdev_retry_count": 3, 01:11:52.505 "transport_ack_timeout": 0, 01:11:52.505 "ctrlr_loss_timeout_sec": 0, 01:11:52.505 "reconnect_delay_sec": 0, 01:11:52.505 "fast_io_fail_timeout_sec": 0, 01:11:52.505 "disable_auto_failback": false, 01:11:52.505 "generate_uuids": false, 01:11:52.505 "transport_tos": 0, 01:11:52.505 "nvme_error_stat": false, 01:11:52.505 "rdma_srq_size": 0, 01:11:52.505 "io_path_stat": false, 01:11:52.505 "allow_accel_sequence": false, 01:11:52.505 "rdma_max_cq_size": 0, 01:11:52.505 "rdma_cm_event_timeout_ms": 0, 01:11:52.505 "dhchap_digests": [ 01:11:52.505 "sha256", 01:11:52.505 "sha384", 01:11:52.505 "sha512" 01:11:52.505 ], 01:11:52.505 "dhchap_dhgroups": [ 01:11:52.505 "null", 01:11:52.505 "ffdhe2048", 01:11:52.505 "ffdhe3072", 01:11:52.505 "ffdhe4096", 01:11:52.505 "ffdhe6144", 01:11:52.505 "ffdhe8192" 01:11:52.505 ] 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "bdev_nvme_set_hotplug", 01:11:52.505 "params": { 01:11:52.505 "period_us": 100000, 01:11:52.505 "enable": false 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "bdev_malloc_create", 01:11:52.505 "params": { 01:11:52.505 "name": "malloc0", 01:11:52.505 "num_blocks": 8192, 01:11:52.505 "block_size": 4096, 01:11:52.505 "physical_block_size": 4096, 01:11:52.505 "uuid": "320bfc02-40f3-4e21-9f07-ebf0ec299b9f", 01:11:52.505 "optimal_io_boundary": 0 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "bdev_wait_for_examine" 01:11:52.505 } 01:11:52.505 ] 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "subsystem": "nbd", 01:11:52.505 "config": [] 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "subsystem": "scheduler", 01:11:52.505 "config": [ 01:11:52.505 { 01:11:52.505 "method": "framework_set_scheduler", 01:11:52.505 "params": { 01:11:52.505 "name": "static" 01:11:52.505 } 01:11:52.505 } 01:11:52.505 ] 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "subsystem": "nvmf", 01:11:52.505 "config": [ 01:11:52.505 { 01:11:52.505 "method": "nvmf_set_config", 01:11:52.505 "params": { 01:11:52.505 "discovery_filter": "match_any", 01:11:52.505 "admin_cmd_passthru": { 01:11:52.505 "identify_ctrlr": false 01:11:52.505 } 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "nvmf_set_max_subsystems", 01:11:52.505 "params": { 01:11:52.505 "max_subsystems": 1024 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "nvmf_set_crdt", 01:11:52.505 "params": { 01:11:52.505 "crdt1": 0, 01:11:52.505 "crdt2": 0, 01:11:52.505 "crdt3": 0 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "nvmf_create_transport", 01:11:52.505 "params": { 01:11:52.505 "trtype": "TCP", 01:11:52.505 "max_queue_depth": 128, 01:11:52.505 "max_io_qpairs_per_ctrlr": 127, 01:11:52.505 "in_capsule_data_size": 4096, 01:11:52.505 "max_io_size": 131072, 01:11:52.505 "io_unit_size": 131072, 01:11:52.505 "max_aq_depth": 128, 01:11:52.505 "num_shared_buffers": 511, 01:11:52.505 "buf_cache_size": 4294967295, 01:11:52.505 "dif_insert_or_strip": false, 01:11:52.505 "zcopy": false, 01:11:52.505 "c2h_success": false, 01:11:52.505 "sock_priority": 0, 01:11:52.505 "abort_timeout_sec": 1, 01:11:52.505 "ack_timeout": 0, 01:11:52.505 "data_wr_pool_size": 0 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "nvmf_create_subsystem", 01:11:52.505 "params": { 01:11:52.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:52.505 "allow_any_host": false, 01:11:52.505 "serial_number": "00000000000000000000", 01:11:52.505 "model_number": 
"SPDK bdev Controller", 01:11:52.505 "max_namespaces": 32, 01:11:52.505 "min_cntlid": 1, 01:11:52.505 "max_cntlid": 65519, 01:11:52.505 "ana_reporting": false 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "nvmf_subsystem_add_host", 01:11:52.505 "params": { 01:11:52.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:52.505 "host": "nqn.2016-06.io.spdk:host1", 01:11:52.505 "psk": "key0" 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "nvmf_subsystem_add_ns", 01:11:52.505 "params": { 01:11:52.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:52.505 "namespace": { 01:11:52.505 "nsid": 1, 01:11:52.505 "bdev_name": "malloc0", 01:11:52.505 "nguid": "320BFC0240F34E219F07EBF0EC299B9F", 01:11:52.505 "uuid": "320bfc02-40f3-4e21-9f07-ebf0ec299b9f", 01:11:52.505 "no_auto_visible": false 01:11:52.505 } 01:11:52.505 } 01:11:52.505 }, 01:11:52.505 { 01:11:52.505 "method": "nvmf_subsystem_add_listener", 01:11:52.505 "params": { 01:11:52.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:52.505 "listen_address": { 01:11:52.505 "trtype": "TCP", 01:11:52.505 "adrfam": "IPv4", 01:11:52.505 "traddr": "10.0.0.2", 01:11:52.505 "trsvcid": "4420" 01:11:52.505 }, 01:11:52.505 "secure_channel": false, 01:11:52.505 "sock_impl": "ssl" 01:11:52.505 } 01:11:52.505 } 01:11:52.505 ] 01:11:52.505 } 01:11:52.505 ] 01:11:52.505 }' 01:11:52.505 11:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 01:11:52.765 11:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 01:11:52.765 "subsystems": [ 01:11:52.765 { 01:11:52.765 "subsystem": "keyring", 01:11:52.765 "config": [ 01:11:52.765 { 01:11:52.765 "method": "keyring_file_add_key", 01:11:52.765 "params": { 01:11:52.765 "name": "key0", 01:11:52.765 "path": "/tmp/tmp.srjxyziNbl" 01:11:52.765 } 01:11:52.765 } 01:11:52.765 ] 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "subsystem": "iobuf", 01:11:52.765 "config": [ 01:11:52.765 { 01:11:52.765 "method": "iobuf_set_options", 01:11:52.765 "params": { 01:11:52.765 "small_pool_count": 8192, 01:11:52.765 "large_pool_count": 1024, 01:11:52.765 "small_bufsize": 8192, 01:11:52.765 "large_bufsize": 135168 01:11:52.765 } 01:11:52.765 } 01:11:52.765 ] 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "subsystem": "sock", 01:11:52.765 "config": [ 01:11:52.765 { 01:11:52.765 "method": "sock_set_default_impl", 01:11:52.765 "params": { 01:11:52.765 "impl_name": "uring" 01:11:52.765 } 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "method": "sock_impl_set_options", 01:11:52.765 "params": { 01:11:52.765 "impl_name": "ssl", 01:11:52.765 "recv_buf_size": 4096, 01:11:52.765 "send_buf_size": 4096, 01:11:52.765 "enable_recv_pipe": true, 01:11:52.765 "enable_quickack": false, 01:11:52.765 "enable_placement_id": 0, 01:11:52.765 "enable_zerocopy_send_server": true, 01:11:52.765 "enable_zerocopy_send_client": false, 01:11:52.765 "zerocopy_threshold": 0, 01:11:52.765 "tls_version": 0, 01:11:52.765 "enable_ktls": false 01:11:52.765 } 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "method": "sock_impl_set_options", 01:11:52.765 "params": { 01:11:52.765 "impl_name": "posix", 01:11:52.765 "recv_buf_size": 2097152, 01:11:52.765 "send_buf_size": 2097152, 01:11:52.765 "enable_recv_pipe": true, 01:11:52.765 "enable_quickack": false, 01:11:52.765 "enable_placement_id": 0, 01:11:52.765 "enable_zerocopy_send_server": true, 01:11:52.765 "enable_zerocopy_send_client": false, 01:11:52.765 "zerocopy_threshold": 0, 01:11:52.765 "tls_version": 0, 01:11:52.765 
"enable_ktls": false 01:11:52.765 } 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "method": "sock_impl_set_options", 01:11:52.765 "params": { 01:11:52.765 "impl_name": "uring", 01:11:52.765 "recv_buf_size": 2097152, 01:11:52.765 "send_buf_size": 2097152, 01:11:52.765 "enable_recv_pipe": true, 01:11:52.765 "enable_quickack": false, 01:11:52.765 "enable_placement_id": 0, 01:11:52.765 "enable_zerocopy_send_server": false, 01:11:52.765 "enable_zerocopy_send_client": false, 01:11:52.765 "zerocopy_threshold": 0, 01:11:52.765 "tls_version": 0, 01:11:52.765 "enable_ktls": false 01:11:52.765 } 01:11:52.765 } 01:11:52.765 ] 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "subsystem": "vmd", 01:11:52.765 "config": [] 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "subsystem": "accel", 01:11:52.765 "config": [ 01:11:52.765 { 01:11:52.765 "method": "accel_set_options", 01:11:52.765 "params": { 01:11:52.765 "small_cache_size": 128, 01:11:52.765 "large_cache_size": 16, 01:11:52.765 "task_count": 2048, 01:11:52.765 "sequence_count": 2048, 01:11:52.765 "buf_count": 2048 01:11:52.765 } 01:11:52.765 } 01:11:52.765 ] 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "subsystem": "bdev", 01:11:52.765 "config": [ 01:11:52.765 { 01:11:52.765 "method": "bdev_set_options", 01:11:52.765 "params": { 01:11:52.765 "bdev_io_pool_size": 65535, 01:11:52.765 "bdev_io_cache_size": 256, 01:11:52.765 "bdev_auto_examine": true, 01:11:52.765 "iobuf_small_cache_size": 128, 01:11:52.765 "iobuf_large_cache_size": 16 01:11:52.765 } 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "method": "bdev_raid_set_options", 01:11:52.765 "params": { 01:11:52.765 "process_window_size_kb": 1024, 01:11:52.765 "process_max_bandwidth_mb_sec": 0 01:11:52.765 } 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "method": "bdev_iscsi_set_options", 01:11:52.765 "params": { 01:11:52.765 "timeout_sec": 30 01:11:52.765 } 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "method": "bdev_nvme_set_options", 01:11:52.765 "params": { 01:11:52.765 "action_on_timeout": "none", 01:11:52.765 "timeout_us": 0, 01:11:52.765 "timeout_admin_us": 0, 01:11:52.765 "keep_alive_timeout_ms": 10000, 01:11:52.765 "arbitration_burst": 0, 01:11:52.765 "low_priority_weight": 0, 01:11:52.765 "medium_priority_weight": 0, 01:11:52.765 "high_priority_weight": 0, 01:11:52.765 "nvme_adminq_poll_period_us": 10000, 01:11:52.765 "nvme_ioq_poll_period_us": 0, 01:11:52.765 "io_queue_requests": 512, 01:11:52.765 "delay_cmd_submit": true, 01:11:52.765 "transport_retry_count": 4, 01:11:52.765 "bdev_retry_count": 3, 01:11:52.765 "transport_ack_timeout": 0, 01:11:52.765 "ctrlr_loss_timeout_sec": 0, 01:11:52.765 "reconnect_delay_sec": 0, 01:11:52.765 "fast_io_fail_timeout_sec": 0, 01:11:52.765 "disable_auto_failback": false, 01:11:52.765 "generate_uuids": false, 01:11:52.765 "transport_tos": 0, 01:11:52.765 "nvme_error_stat": false, 01:11:52.765 "rdma_srq_size": 0, 01:11:52.765 "io_path_stat": false, 01:11:52.765 "allow_accel_sequence": false, 01:11:52.765 "rdma_max_cq_size": 0, 01:11:52.765 "rdma_cm_event_timeout_ms": 0, 01:11:52.765 "dhchap_digests": [ 01:11:52.765 "sha256", 01:11:52.765 "sha384", 01:11:52.765 "sha512" 01:11:52.765 ], 01:11:52.765 "dhchap_dhgroups": [ 01:11:52.765 "null", 01:11:52.765 "ffdhe2048", 01:11:52.765 "ffdhe3072", 01:11:52.765 "ffdhe4096", 01:11:52.765 "ffdhe6144", 01:11:52.765 "ffdhe8192" 01:11:52.765 ] 01:11:52.765 } 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "method": "bdev_nvme_attach_controller", 01:11:52.765 "params": { 01:11:52.765 "name": "nvme0", 01:11:52.765 "trtype": "TCP", 01:11:52.765 
"adrfam": "IPv4", 01:11:52.765 "traddr": "10.0.0.2", 01:11:52.765 "trsvcid": "4420", 01:11:52.765 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:52.765 "prchk_reftag": false, 01:11:52.765 "prchk_guard": false, 01:11:52.765 "ctrlr_loss_timeout_sec": 0, 01:11:52.765 "reconnect_delay_sec": 0, 01:11:52.765 "fast_io_fail_timeout_sec": 0, 01:11:52.765 "psk": "key0", 01:11:52.765 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:52.765 "hdgst": false, 01:11:52.765 "ddgst": false 01:11:52.765 } 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "method": "bdev_nvme_set_hotplug", 01:11:52.765 "params": { 01:11:52.765 "period_us": 100000, 01:11:52.765 "enable": false 01:11:52.765 } 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "method": "bdev_enable_histogram", 01:11:52.765 "params": { 01:11:52.765 "name": "nvme0n1", 01:11:52.765 "enable": true 01:11:52.765 } 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "method": "bdev_wait_for_examine" 01:11:52.765 } 01:11:52.765 ] 01:11:52.765 }, 01:11:52.765 { 01:11:52.765 "subsystem": "nbd", 01:11:52.765 "config": [] 01:11:52.765 } 01:11:52.765 ] 01:11:52.765 }' 01:11:52.765 11:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 86017 01:11:52.765 11:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86017 ']' 01:11:52.765 11:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86017 01:11:52.765 11:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:52.766 11:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:52.766 11:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86017 01:11:52.766 11:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:11:52.766 11:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:11:52.766 killing process with pid 86017 01:11:52.766 11:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86017' 01:11:52.766 11:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86017 01:11:52.766 Received shutdown signal, test time was about 1.000000 seconds 01:11:52.766 01:11:52.766 Latency(us) 01:11:52.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:52.766 =================================================================================================================== 01:11:52.766 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:11:52.766 11:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86017 01:11:53.023 11:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 85985 01:11:53.023 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85985 ']' 01:11:53.023 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85985 01:11:53.023 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:53.023 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:53.023 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85985 01:11:53.023 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:11:53.023 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:11:53.023 killing process with pid 85985 01:11:53.023 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85985' 01:11:53.023 11:08:58 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 85985 01:11:53.023 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85985 01:11:53.281 11:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 01:11:53.281 "subsystems": [ 01:11:53.281 { 01:11:53.281 "subsystem": "keyring", 01:11:53.281 "config": [ 01:11:53.281 { 01:11:53.281 "method": "keyring_file_add_key", 01:11:53.281 "params": { 01:11:53.281 "name": "key0", 01:11:53.281 "path": "/tmp/tmp.srjxyziNbl" 01:11:53.281 } 01:11:53.281 } 01:11:53.281 ] 01:11:53.281 }, 01:11:53.281 { 01:11:53.281 "subsystem": "iobuf", 01:11:53.281 "config": [ 01:11:53.281 { 01:11:53.281 "method": "iobuf_set_options", 01:11:53.281 "params": { 01:11:53.281 "small_pool_count": 8192, 01:11:53.281 "large_pool_count": 1024, 01:11:53.281 "small_bufsize": 8192, 01:11:53.281 "large_bufsize": 135168 01:11:53.281 } 01:11:53.281 } 01:11:53.281 ] 01:11:53.281 }, 01:11:53.281 { 01:11:53.281 "subsystem": "sock", 01:11:53.281 "config": [ 01:11:53.281 { 01:11:53.281 "method": "sock_set_default_impl", 01:11:53.281 "params": { 01:11:53.281 "impl_name": "uring" 01:11:53.281 } 01:11:53.281 }, 01:11:53.281 { 01:11:53.281 "method": "sock_impl_set_options", 01:11:53.281 "params": { 01:11:53.281 "impl_name": "ssl", 01:11:53.281 "recv_buf_size": 4096, 01:11:53.281 "send_buf_size": 4096, 01:11:53.281 "enable_recv_pipe": true, 01:11:53.281 "enable_quickack": false, 01:11:53.281 "enable_placement_id": 0, 01:11:53.281 "enable_zerocopy_send_server": true, 01:11:53.281 "enable_zerocopy_send_client": false, 01:11:53.281 "zerocopy_threshold": 0, 01:11:53.281 "tls_version": 0, 01:11:53.281 "enable_ktls": false 01:11:53.281 } 01:11:53.281 }, 01:11:53.281 { 01:11:53.281 "method": "sock_impl_set_options", 01:11:53.281 "params": { 01:11:53.281 "impl_name": "posix", 01:11:53.281 "recv_buf_size": 2097152, 01:11:53.281 "send_buf_size": 2097152, 01:11:53.281 "enable_recv_pipe": true, 01:11:53.281 "enable_quickack": false, 01:11:53.281 "enable_placement_id": 0, 01:11:53.281 "enable_zerocopy_send_server": true, 01:11:53.281 "enable_zerocopy_send_client": false, 01:11:53.281 "zerocopy_threshold": 0, 01:11:53.281 "tls_version": 0, 01:11:53.281 "enable_ktls": false 01:11:53.281 } 01:11:53.281 }, 01:11:53.281 { 01:11:53.281 "method": "sock_impl_set_options", 01:11:53.281 "params": { 01:11:53.281 "impl_name": "uring", 01:11:53.281 "recv_buf_size": 2097152, 01:11:53.281 "send_buf_size": 2097152, 01:11:53.281 "enable_recv_pipe": true, 01:11:53.281 "enable_quickack": false, 01:11:53.281 "enable_placement_id": 0, 01:11:53.281 "enable_zerocopy_send_server": false, 01:11:53.281 "enable_zerocopy_send_client": false, 01:11:53.281 "zerocopy_threshold": 0, 01:11:53.281 "tls_version": 0, 01:11:53.281 "enable_ktls": false 01:11:53.281 } 01:11:53.281 } 01:11:53.281 ] 01:11:53.281 }, 01:11:53.281 { 01:11:53.281 "subsystem": "vmd", 01:11:53.281 "config": [] 01:11:53.281 }, 01:11:53.281 { 01:11:53.281 "subsystem": "accel", 01:11:53.281 "config": [ 01:11:53.281 { 01:11:53.281 "method": "accel_set_options", 01:11:53.281 "params": { 01:11:53.281 "small_cache_size": 128, 01:11:53.281 "large_cache_size": 16, 01:11:53.282 "task_count": 2048, 01:11:53.282 "sequence_count": 2048, 01:11:53.282 "buf_count": 2048 01:11:53.282 } 01:11:53.282 } 01:11:53.282 ] 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "subsystem": "bdev", 01:11:53.282 "config": [ 01:11:53.282 { 01:11:53.282 "method": "bdev_set_options", 01:11:53.282 "params": { 01:11:53.282 "bdev_io_pool_size": 65535, 01:11:53.282 "bdev_io_cache_size": 256, 01:11:53.282 
"bdev_auto_examine": true, 01:11:53.282 "iobuf_small_cache_size": 128, 01:11:53.282 "iobuf_large_cache_size": 16 01:11:53.282 } 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "method": "bdev_raid_set_options", 01:11:53.282 "params": { 01:11:53.282 "process_window_size_kb": 1024, 01:11:53.282 "process_max_bandwidth_mb_sec": 0 01:11:53.282 } 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "method": "bdev_iscsi_set_options", 01:11:53.282 "params": { 01:11:53.282 "timeout_sec": 30 01:11:53.282 } 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "method": "bdev_nvme_set_options", 01:11:53.282 "params": { 01:11:53.282 "action_on_timeout": "none", 01:11:53.282 "timeout_us": 0, 01:11:53.282 "timeout_admin_us": 0, 01:11:53.282 "keep_alive_timeout_ms": 10000, 01:11:53.282 "arbitration_burst": 0, 01:11:53.282 "low_priority_weight": 0, 01:11:53.282 "medium_priority_weight": 0, 01:11:53.282 "high_priority_weight": 0, 01:11:53.282 "nvme_adminq_poll_period_us": 10000, 01:11:53.282 "nvme_ioq_poll_period_us": 0, 01:11:53.282 "io_queue_requests": 0, 01:11:53.282 "delay_cmd_submit": true, 01:11:53.282 "transport_retry_count": 4, 01:11:53.282 "bdev_retry_count": 3, 01:11:53.282 "transport_ack_timeout": 0, 01:11:53.282 "ctrlr_loss_timeout_sec": 0, 01:11:53.282 "reconnect_delay_sec": 0, 01:11:53.282 "fast_io_fail_timeout_sec": 0, 01:11:53.282 "disable_auto_failback": false, 01:11:53.282 "generate_uuids": false, 01:11:53.282 "transport_tos": 0, 01:11:53.282 "nvme_error_stat": false, 01:11:53.282 "rdma_srq_size": 0, 01:11:53.282 "io_path_stat": false, 01:11:53.282 "allow_accel_sequence": false, 01:11:53.282 "rdma_max_cq_size": 0, 01:11:53.282 "rdma_cm_event_timeout_ms": 0, 01:11:53.282 "dhchap_digests": [ 01:11:53.282 "sha256", 01:11:53.282 "sha384", 01:11:53.282 "sha512" 01:11:53.282 ], 01:11:53.282 "dhchap_dhgroups": [ 01:11:53.282 "null", 01:11:53.282 "ffdhe2048", 01:11:53.282 "ffdhe3072", 01:11:53.282 "ffdhe4096", 01:11:53.282 "ffdhe6144", 01:11:53.282 "ffdhe8192" 01:11:53.282 ] 01:11:53.282 } 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "method": "bdev_nvme_set_hotplug", 01:11:53.282 "params": { 01:11:53.282 "period_us": 100000, 01:11:53.282 "enable": false 01:11:53.282 } 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "method": "bdev_malloc_create", 01:11:53.282 "params": { 01:11:53.282 "name": "malloc0", 01:11:53.282 "num_blocks": 8192, 01:11:53.282 "block_size": 4096, 01:11:53.282 "physical_block_size": 4096, 01:11:53.282 "uuid": "320bfc02-40f3-4e21-9f07-ebf0ec299b9f", 01:11:53.282 "optimal_io_boundary": 0 01:11:53.282 } 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "method": "bdev_wait_for_examine" 01:11:53.282 } 01:11:53.282 ] 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "subsystem": "nbd", 01:11:53.282 "config": [] 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "subsystem": "scheduler", 01:11:53.282 "config": [ 01:11:53.282 { 01:11:53.282 "method": "framework_set_scheduler", 01:11:53.282 "params": { 01:11:53.282 "name": "static" 01:11:53.282 } 01:11:53.282 } 01:11:53.282 ] 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "subsystem": "nvmf", 01:11:53.282 "config": [ 01:11:53.282 { 01:11:53.282 "method": "nvmf_set_config", 01:11:53.282 "params": { 01:11:53.282 "discovery_filter": "match_any", 01:11:53.282 "admin_cmd_passthru": { 01:11:53.282 "identify_ctrlr": false 01:11:53.282 } 01:11:53.282 } 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "method": "nvmf_set_max_subsystems", 01:11:53.282 "params": { 01:11:53.282 "max_subsystems": 1024 01:11:53.282 } 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "method": "nvmf_set_crdt", 01:11:53.282 
"params": { 01:11:53.282 "crdt1": 0, 01:11:53.282 "crdt2": 0, 01:11:53.282 "crdt3": 0 01:11:53.282 } 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "method": "nvmf_create_transport", 01:11:53.282 "params": { 01:11:53.282 "trtype": "TCP", 01:11:53.282 "max_queue_depth": 128, 01:11:53.282 "max_io_qpairs_per_ctrlr": 127, 01:11:53.282 "in_capsule_data_size": 4096, 01:11:53.282 "max_io_size": 131072, 01:11:53.282 "io_unit_size": 131072, 01:11:53.282 "max_aq_depth": 128, 01:11:53.282 "num_shared_buffers": 511, 01:11:53.282 "buf_cache_size": 4294967295, 01:11:53.282 "dif_insert_or_strip": false, 01:11:53.282 "zcopy": false, 01:11:53.282 "c2h_success": false, 01:11:53.282 "sock_priority": 0, 01:11:53.282 "abort_timeout_sec": 1, 01:11:53.282 "ack_timeout": 0, 01:11:53.282 "data_wr_pool_size": 0 01:11:53.282 } 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "method": "nvmf_create_subsystem", 01:11:53.282 "params": { 01:11:53.282 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:53.282 "allow_any_host": false, 01:11:53.282 "serial_number": "00000000000000000000", 01:11:53.282 "model_number": "SPDK bdev Controller", 01:11:53.282 "max_namespaces": 32, 01:11:53.282 "min_cntlid": 1, 01:11:53.282 "max_cntlid": 65519, 01:11:53.282 "ana_reporting": false 01:11:53.282 } 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "method": "nvmf_subsystem_add_host", 01:11:53.282 "params": { 01:11:53.282 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:53.282 "host": "nqn.2016-06.io.spdk:host1", 01:11:53.282 "psk": "key0" 01:11:53.282 } 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "method": "nvmf_subsystem_add_ns", 01:11:53.282 "params": { 01:11:53.282 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:53.282 "namespace": { 01:11:53.282 "nsid": 1, 01:11:53.282 "bdev_name": "malloc0", 01:11:53.282 "nguid": "320BFC0240F34E219F07EBF0EC299B9F", 01:11:53.282 "uuid": "320bfc02-40f3-4e21-9f07-ebf0ec299b9f", 01:11:53.282 "no_auto_visible": false 01:11:53.282 } 01:11:53.282 } 01:11:53.282 }, 01:11:53.282 { 01:11:53.282 "method": "nvmf_subsystem_add_listener", 01:11:53.282 "params": { 01:11:53.282 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:11:53.282 "listen_address": { 01:11:53.282 "trtype": "TCP", 01:11:53.282 "adrfam": "IPv4", 01:11:53.282 "traddr": "10.0.0.2", 01:11:53.282 "trsvcid": "4420" 01:11:53.282 }, 01:11:53.282 "secure_channel": false, 01:11:53.282 "sock_impl": "ssl" 01:11:53.282 } 01:11:53.282 } 01:11:53.282 ] 01:11:53.282 } 01:11:53.282 ] 01:11:53.282 }' 01:11:53.282 11:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 01:11:53.282 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:11:53.282 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:11:53.282 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:53.282 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86071 01:11:53.282 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86071 01:11:53.282 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86071 ']' 01:11:53.282 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:53.282 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:53.282 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:53.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:11:53.282 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:53.282 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:53.282 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 01:11:53.282 [2024-07-22 11:08:58.356041] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:11:53.282 [2024-07-22 11:08:58.356110] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:53.282 [2024-07-22 11:08:58.485581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:53.540 [2024-07-22 11:08:58.531591] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:53.540 [2024-07-22 11:08:58.531637] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:53.540 [2024-07-22 11:08:58.531647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:53.540 [2024-07-22 11:08:58.531655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:53.540 [2024-07-22 11:08:58.531662] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:11:53.540 [2024-07-22 11:08:58.531734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:11:53.540 [2024-07-22 11:08:58.686548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:53.798 [2024-07-22 11:08:58.748237] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:53.798 [2024-07-22 11:08:58.780137] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:11:53.798 [2024-07-22 11:08:58.780315] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:54.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=86099 01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 86099 /var/tmp/bdevperf.sock 01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86099 ']' 01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
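At this point the target inside the nvmf_tgt_ns_spdk namespace has initialized its TCP transport and is listening on 10.0.0.2:4420 with TLS flagged as experimental, and the script turns to waiting for the bdevperf side. As a hypothetical spot check (not something tls.sh itself does), the target's RPC socket could be queried here to confirm the subsystem landed as configured:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems
# Expected output: nqn.2016-06.io.spdk:cnode1 with a TCP listener on
# 10.0.0.2:4420, namespace 1 backed by malloc0, and host
# nqn.2016-06.io.spdk:host1 admitted via the key registered as "key0".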
01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 01:11:54.057 11:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:54.317 11:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 01:11:54.317 "subsystems": [ 01:11:54.317 { 01:11:54.317 "subsystem": "keyring", 01:11:54.317 "config": [ 01:11:54.317 { 01:11:54.317 "method": "keyring_file_add_key", 01:11:54.317 "params": { 01:11:54.317 "name": "key0", 01:11:54.317 "path": "/tmp/tmp.srjxyziNbl" 01:11:54.317 } 01:11:54.317 } 01:11:54.317 ] 01:11:54.317 }, 01:11:54.317 { 01:11:54.317 "subsystem": "iobuf", 01:11:54.317 "config": [ 01:11:54.317 { 01:11:54.317 "method": "iobuf_set_options", 01:11:54.317 "params": { 01:11:54.317 "small_pool_count": 8192, 01:11:54.317 "large_pool_count": 1024, 01:11:54.317 "small_bufsize": 8192, 01:11:54.317 "large_bufsize": 135168 01:11:54.317 } 01:11:54.317 } 01:11:54.317 ] 01:11:54.317 }, 01:11:54.317 { 01:11:54.317 "subsystem": "sock", 01:11:54.317 "config": [ 01:11:54.317 { 01:11:54.317 "method": "sock_set_default_impl", 01:11:54.317 "params": { 01:11:54.317 "impl_name": "uring" 01:11:54.317 } 01:11:54.317 }, 01:11:54.317 { 01:11:54.317 "method": "sock_impl_set_options", 01:11:54.317 "params": { 01:11:54.317 "impl_name": "ssl", 01:11:54.317 "recv_buf_size": 4096, 01:11:54.317 "send_buf_size": 4096, 01:11:54.317 "enable_recv_pipe": true, 01:11:54.317 "enable_quickack": false, 01:11:54.317 "enable_placement_id": 0, 01:11:54.317 "enable_zerocopy_send_server": true, 01:11:54.317 "enable_zerocopy_send_client": false, 01:11:54.317 "zerocopy_threshold": 0, 01:11:54.317 "tls_version": 0, 01:11:54.317 "enable_ktls": false 01:11:54.317 } 01:11:54.317 }, 01:11:54.317 { 01:11:54.317 "method": "sock_impl_set_options", 01:11:54.317 "params": { 01:11:54.317 "impl_name": "posix", 01:11:54.317 "recv_buf_size": 2097152, 01:11:54.317 "send_buf_size": 2097152, 01:11:54.317 "enable_recv_pipe": true, 01:11:54.317 "enable_quickack": false, 01:11:54.317 "enable_placement_id": 0, 01:11:54.317 "enable_zerocopy_send_server": true, 01:11:54.317 "enable_zerocopy_send_client": false, 01:11:54.317 "zerocopy_threshold": 0, 01:11:54.317 "tls_version": 0, 01:11:54.317 "enable_ktls": false 01:11:54.317 } 01:11:54.317 }, 01:11:54.317 { 01:11:54.317 "method": "sock_impl_set_options", 01:11:54.317 "params": { 01:11:54.317 "impl_name": "uring", 01:11:54.317 "recv_buf_size": 2097152, 01:11:54.317 "send_buf_size": 2097152, 01:11:54.317 "enable_recv_pipe": true, 01:11:54.317 "enable_quickack": false, 01:11:54.317 "enable_placement_id": 0, 01:11:54.317 "enable_zerocopy_send_server": false, 01:11:54.317 "enable_zerocopy_send_client": false, 01:11:54.317 "zerocopy_threshold": 0, 01:11:54.317 "tls_version": 0, 01:11:54.317 "enable_ktls": false 01:11:54.317 } 01:11:54.317 } 01:11:54.317 ] 01:11:54.317 }, 01:11:54.317 { 01:11:54.317 "subsystem": "vmd", 01:11:54.317 "config": [] 01:11:54.317 }, 01:11:54.317 { 01:11:54.317 "subsystem": "accel", 01:11:54.317 "config": [ 01:11:54.317 { 01:11:54.317 "method": "accel_set_options", 01:11:54.317 "params": { 01:11:54.317 "small_cache_size": 128, 01:11:54.317 "large_cache_size": 16, 01:11:54.317 "task_count": 2048, 01:11:54.317 "sequence_count": 2048, 01:11:54.317 "buf_count": 2048 01:11:54.317 } 01:11:54.317 } 01:11:54.317 ] 01:11:54.317 }, 01:11:54.317 { 
01:11:54.317 "subsystem": "bdev", 01:11:54.317 "config": [ 01:11:54.317 { 01:11:54.317 "method": "bdev_set_options", 01:11:54.317 "params": { 01:11:54.317 "bdev_io_pool_size": 65535, 01:11:54.317 "bdev_io_cache_size": 256, 01:11:54.317 "bdev_auto_examine": true, 01:11:54.317 "iobuf_small_cache_size": 128, 01:11:54.317 "iobuf_large_cache_size": 16 01:11:54.317 } 01:11:54.317 }, 01:11:54.317 { 01:11:54.317 "method": "bdev_raid_set_options", 01:11:54.317 "params": { 01:11:54.317 "process_window_size_kb": 1024, 01:11:54.317 "process_max_bandwidth_mb_sec": 0 01:11:54.317 } 01:11:54.317 }, 01:11:54.317 { 01:11:54.317 "method": "bdev_iscsi_set_options", 01:11:54.317 "params": { 01:11:54.317 "timeout_sec": 30 01:11:54.317 } 01:11:54.317 }, 01:11:54.317 { 01:11:54.317 "method": "bdev_nvme_set_options", 01:11:54.317 "params": { 01:11:54.317 "action_on_timeout": "none", 01:11:54.317 "timeout_us": 0, 01:11:54.317 "timeout_admin_us": 0, 01:11:54.317 "keep_alive_timeout_ms": 10000, 01:11:54.317 "arbitration_burst": 0, 01:11:54.317 "low_priority_weight": 0, 01:11:54.317 "medium_priority_weight": 0, 01:11:54.317 "high_priority_weight": 0, 01:11:54.317 "nvme_adminq_poll_period_us": 10000, 01:11:54.318 "nvme_ioq_poll_period_us": 0, 01:11:54.318 "io_queue_requests": 512, 01:11:54.318 "delay_cmd_submit": true, 01:11:54.318 "transport_retry_count": 4, 01:11:54.318 "bdev_retry_count": 3, 01:11:54.318 "transport_ack_timeout": 0, 01:11:54.318 "ctrlr_loss_timeout_sec": 0, 01:11:54.318 "reconnect_delay_sec": 0, 01:11:54.318 "fast_io_fail_timeout_sec": 0, 01:11:54.318 "disable_auto_failback": false, 01:11:54.318 "generate_uuids": false, 01:11:54.318 "transport_tos": 0, 01:11:54.318 "nvme_error_stat": false, 01:11:54.318 "rdma_srq_size": 0, 01:11:54.318 "io_path_stat": false, 01:11:54.318 "allow_accel_sequence": false, 01:11:54.318 "rdma_max_cq_size": 0, 01:11:54.318 "rdma_cm_event_timeout_ms": 0, 01:11:54.318 "dhchap_digests": [ 01:11:54.318 "sha256", 01:11:54.318 "sha384", 01:11:54.318 "sha512" 01:11:54.318 ], 01:11:54.318 "dhchap_dhgroups": [ 01:11:54.318 "null", 01:11:54.318 "ffdhe2048", 01:11:54.318 "ffdhe3072", 01:11:54.318 "ffdhe4096", 01:11:54.318 "ffdhe6144", 01:11:54.318 "ffdhe8192" 01:11:54.318 ] 01:11:54.318 } 01:11:54.318 }, 01:11:54.318 { 01:11:54.318 "method": "bdev_nvme_attach_controller", 01:11:54.318 "params": { 01:11:54.318 "name": "nvme0", 01:11:54.318 "trtype": "TCP", 01:11:54.318 "adrfam": "IPv4", 01:11:54.318 "traddr": "10.0.0.2", 01:11:54.318 "trsvcid": "4420", 01:11:54.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:54.318 "prchk_reftag": false, 01:11:54.318 "prchk_guard": false, 01:11:54.318 "ctrlr_loss_timeout_sec": 0, 01:11:54.318 "reconnect_delay_sec": 0, 01:11:54.318 "fast_io_fail_timeout_sec": 0, 01:11:54.318 "psk": "key0", 01:11:54.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:54.318 "hdgst": false, 01:11:54.318 "ddgst": false 01:11:54.318 } 01:11:54.318 }, 01:11:54.318 { 01:11:54.318 "method": "bdev_nvme_set_hotplug", 01:11:54.318 "params": { 01:11:54.318 "period_us": 100000, 01:11:54.318 "enable": false 01:11:54.318 } 01:11:54.318 }, 01:11:54.318 { 01:11:54.318 "method": "bdev_enable_histogram", 01:11:54.318 "params": { 01:11:54.318 "name": "nvme0n1", 01:11:54.318 "enable": true 01:11:54.318 } 01:11:54.318 }, 01:11:54.318 { 01:11:54.318 "method": "bdev_wait_for_examine" 01:11:54.318 } 01:11:54.318 ] 01:11:54.318 }, 01:11:54.318 { 01:11:54.318 "subsystem": "nbd", 01:11:54.318 "config": [] 01:11:54.318 } 01:11:54.318 ] 01:11:54.318 }' 01:11:54.318 [2024-07-22 11:08:59.311379] 
Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:11:54.318 [2024-07-22 11:08:59.311562] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86099 ] 01:11:54.318 [2024-07-22 11:08:59.441443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:54.318 [2024-07-22 11:08:59.486245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:11:54.577 [2024-07-22 11:08:59.609165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:54.577 [2024-07-22 11:08:59.643143] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:11:55.157 11:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:55.157 11:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:11:55.157 11:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:11:55.157 11:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 01:11:55.157 11:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:55.157 11:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:11:55.416 Running I/O for 1 seconds... 01:11:56.378 01:11:56.378 Latency(us) 01:11:56.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:56.378 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:11:56.378 Verification LBA range: start 0x0 length 0x2000 01:11:56.378 nvme0n1 : 1.01 5744.30 22.44 0.00 0.00 22123.80 4684.90 17055.15 01:11:56.378 =================================================================================================================== 01:11:56.378 Total : 5744.30 22.44 0.00 0.00 22123.80 4684.90 17055.15 01:11:56.379 0 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:11:56.379 nvmf_trace.0 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 86099 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' 
-z 86099 ']' 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86099 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:56.379 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86099 01:11:56.638 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:11:56.638 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:11:56.638 killing process with pid 86099 01:11:56.638 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86099' 01:11:56.638 Received shutdown signal, test time was about 1.000000 seconds 01:11:56.638 01:11:56.638 Latency(us) 01:11:56.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:56.638 =================================================================================================================== 01:11:56.638 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:11:56.638 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86099 01:11:56.638 11:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86099 01:11:56.638 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 01:11:56.638 11:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 01:11:56.638 11:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 01:11:56.896 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:11:56.896 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 01:11:56.896 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 01:11:56.896 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:11:56.896 rmmod nvme_tcp 01:11:56.896 rmmod nvme_fabrics 01:11:56.896 rmmod nvme_keyring 01:11:56.896 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:11:56.896 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 01:11:56.896 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 01:11:56.896 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 86071 ']' 01:11:56.897 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 86071 01:11:56.897 11:09:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86071 ']' 01:11:56.897 11:09:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86071 01:11:56.897 11:09:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86071 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:11:57.173 killing process with pid 86071 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86071' 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86071 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86071 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:11:57.173 11:09:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:57.432 11:09:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:11:57.432 11:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.qYijhtpULj /tmp/tmp.GVFvE7DAfW /tmp/tmp.srjxyziNbl 01:11:57.432 01:11:57.432 real 1m20.427s 01:11:57.432 user 1m59.463s 01:11:57.432 sys 0m29.896s 01:11:57.432 11:09:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 01:11:57.432 ************************************ 01:11:57.432 END TEST nvmf_tls 01:11:57.432 11:09:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:11:57.432 ************************************ 01:11:57.432 11:09:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:11:57.432 11:09:02 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 01:11:57.432 11:09:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:11:57.432 11:09:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:11:57.432 11:09:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:11:57.432 ************************************ 01:11:57.432 START TEST nvmf_fips 01:11:57.432 ************************************ 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 01:11:57.432 * Looking for test storage... 
01:11:57.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:11:57.432 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 01:11:57.692 Error setting digest 01:11:57.692 00924C87EE7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 01:11:57.692 00924C87EE7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:11:57.692 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:11:57.949 Cannot find device "nvmf_tgt_br" 01:11:57.949 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 01:11:57.949 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:11:57.949 Cannot find device "nvmf_tgt_br2" 01:11:57.949 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 01:11:57.949 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:11:57.949 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:11:57.949 Cannot find device "nvmf_tgt_br" 01:11:57.949 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 01:11:57.949 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:11:57.949 Cannot find device "nvmf_tgt_br2" 01:11:57.949 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 01:11:57.949 11:09:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:11:57.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:11:57.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:11:57.949 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:11:58.206 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:11:58.206 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:11:58.206 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:11:58.206 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:11:58.206 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:11:58.206 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:11:58.206 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:11:58.206 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:11:58.206 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:11:58.206 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:11:58.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:11:58.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 01:11:58.207 01:11:58.207 --- 10.0.0.2 ping statistics --- 01:11:58.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:58.207 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:11:58.207 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:11:58.207 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 01:11:58.207 01:11:58.207 --- 10.0.0.3 ping statistics --- 01:11:58.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:58.207 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:11:58.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:11:58.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 01:11:58.207 01:11:58.207 --- 10.0.0.1 ping statistics --- 01:11:58.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:58.207 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=86381 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 86381 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 86381 ']' 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:58.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:58.207 11:09:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:11:58.207 [2024-07-22 11:09:03.376135] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:11:58.207 [2024-07-22 11:09:03.376200] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:58.465 [2024-07-22 11:09:03.517098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:58.465 [2024-07-22 11:09:03.563227] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:58.465 [2024-07-22 11:09:03.563271] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:58.465 [2024-07-22 11:09:03.563282] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:58.465 [2024-07-22 11:09:03.563290] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:58.465 [2024-07-22 11:09:03.563297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:11:58.465 [2024-07-22 11:09:03.563325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:11:58.465 [2024-07-22 11:09:03.604088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:11:59.038 11:09:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:59.038 11:09:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 01:11:59.038 11:09:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:11:59.038 11:09:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 01:11:59.038 11:09:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:11:59.038 11:09:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:59.038 11:09:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 01:11:59.038 11:09:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 01:11:59.038 11:09:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:11:59.038 11:09:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 01:11:59.038 11:09:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:11:59.038 11:09:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:11:59.038 11:09:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:11:59.038 11:09:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:11:59.297 [2024-07-22 11:09:04.404277] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:59.297 [2024-07-22 11:09:04.420223] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:11:59.297 [2024-07-22 11:09:04.420420] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:11:59.297 [2024-07-22 11:09:04.449124] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:11:59.297 malloc0 01:11:59.297 11:09:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
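The fips.sh key handling traced above reduces to a few shell steps, restated here for readability. The values are copied from the trace; the closing comment about the deprecation is inferred from the nvmf_tcp_psk_path notice a few lines up.

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n "$key" > "$key_path"   # PSK interchange string, written without a trailing newline
chmod 0600 "$key_path"         # keep the pre-shared key readable only by the test user
# setup_nvmf_tgt_conf then hands this path to the target via rpc.py, which is what
# triggers the "nvmf_tcp_psk_path: deprecated feature PSK path" warning above; the
# keyring-registered key used in the TLS test earlier is the intended replacement.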
01:11:59.297 11:09:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=86418 01:11:59.297 11:09:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:11:59.297 11:09:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 86418 /var/tmp/bdevperf.sock 01:11:59.297 11:09:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 86418 ']' 01:11:59.297 11:09:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:11:59.297 11:09:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:59.297 11:09:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:11:59.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:11:59.297 11:09:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:59.297 11:09:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:11:59.555 [2024-07-22 11:09:04.550098] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:11:59.555 [2024-07-22 11:09:04.550173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86418 ] 01:11:59.555 [2024-07-22 11:09:04.682605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:59.555 [2024-07-22 11:09:04.728927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:11:59.813 [2024-07-22 11:09:04.770537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:12:00.379 11:09:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:12:00.379 11:09:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 01:12:00.379 11:09:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:12:00.379 [2024-07-22 11:09:05.550643] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:12:00.379 [2024-07-22 11:09:05.550743] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:12:00.637 TLSTESTn1 01:12:00.638 11:09:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:12:00.638 Running I/O for 10 seconds... 
01:12:10.612 01:12:10.612 Latency(us) 01:12:10.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:12:10.612 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:12:10.612 Verification LBA range: start 0x0 length 0x2000 01:12:10.612 TLSTESTn1 : 10.02 5490.67 21.45 0.00 0.00 23270.45 6211.44 31794.17 01:12:10.612 =================================================================================================================== 01:12:10.612 Total : 5490.67 21.45 0.00 0.00 23270.45 6211.44 31794.17 01:12:10.612 0 01:12:10.612 11:09:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 01:12:10.612 11:09:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 01:12:10.612 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 01:12:10.612 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 01:12:10.612 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 01:12:10.612 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:12:10.612 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 01:12:10.612 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 01:12:10.612 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 01:12:10.612 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:12:10.612 nvmf_trace.0 01:12:10.871 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 01:12:10.871 11:09:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 86418 01:12:10.871 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 86418 ']' 01:12:10.871 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 86418 01:12:10.871 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 01:12:10.871 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:12:10.871 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86418 01:12:10.871 killing process with pid 86418 01:12:10.871 Received shutdown signal, test time was about 10.000000 seconds 01:12:10.871 01:12:10.871 Latency(us) 01:12:10.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:12:10.871 =================================================================================================================== 01:12:10.871 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:12:10.871 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:12:10.871 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:12:10.871 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86418' 01:12:10.871 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 86418 01:12:10.871 [2024-07-22 11:09:15.873557] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:12:10.871 11:09:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 86418 01:12:11.130 11:09:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 01:12:11.130 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
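The TLSTESTn1 row above reports 5490.67 IOPS at a 4096-byte IO size over the 10-second verify run, and the MiB/s column is just that product rescaled, which allows a quick consistency check (illustrative only):

awk 'BEGIN { printf "%.2f MiB/s\n", 5490.67 * 4096 / (1024 * 1024) }'
# Prints 21.45 MiB/s, matching the bdevperf report; the same arithmetic on the
# earlier nvmf_tls run (5744.30 IOPS) reproduces the 22.44 MiB/s shown there.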
01:12:11.130 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:12:11.389 rmmod nvme_tcp 01:12:11.389 rmmod nvme_fabrics 01:12:11.389 rmmod nvme_keyring 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 86381 ']' 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 86381 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 86381 ']' 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 86381 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86381 01:12:11.389 killing process with pid 86381 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86381' 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 86381 01:12:11.389 [2024-07-22 11:09:16.475691] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:12:11.389 11:09:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 86381 01:12:11.751 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:12:11.751 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:12:11.751 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:12:11.751 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:12:11.751 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 01:12:11.751 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:12:11.751 11:09:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:12:11.751 11:09:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:11.751 11:09:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:12:11.751 11:09:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:12:11.751 01:12:11.751 real 0m14.259s 01:12:11.751 user 0m17.732s 01:12:11.751 sys 0m6.514s 01:12:11.751 11:09:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 01:12:11.751 11:09:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:12:11.751 ************************************ 01:12:11.751 END TEST nvmf_fips 01:12:11.752 ************************************ 01:12:11.752 11:09:16 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:12:11.752 11:09:16 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 01:12:11.752 11:09:16 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 01:12:11.752 11:09:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:12:11.752 11:09:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:12:11.752 11:09:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:12:11.752 ************************************ 01:12:11.752 START TEST nvmf_fuzz 01:12:11.752 ************************************ 01:12:11.752 11:09:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 01:12:12.016 * Looking for test storage... 01:12:12.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 01:12:12.016 11:09:16 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:12:12.016 11:09:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:12:12.016 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:12:12.016 Cannot find device "nvmf_tgt_br" 01:12:12.016 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 01:12:12.016 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:12:12.016 Cannot find device "nvmf_tgt_br2" 01:12:12.016 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:12:12.017 Cannot find device "nvmf_tgt_br" 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:12:12.017 Cannot find device "nvmf_tgt_br2" 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:12:12.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:12:12.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:12:12.017 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:12:12.275 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:12:12.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:12:12.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 01:12:12.276 01:12:12.276 --- 10.0.0.2 ping statistics --- 01:12:12.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:12.276 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:12:12.276 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:12:12.276 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.138 ms 01:12:12.276 01:12:12.276 --- 10.0.0.3 ping statistics --- 01:12:12.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:12.276 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:12:12.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:12:12.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 01:12:12.276 01:12:12.276 --- 10.0.0.1 ping statistics --- 01:12:12.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:12.276 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=86761 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 86761 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 86761 ']' 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 01:12:12.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
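The nvmf_veth_init sequence traced above (and repeated later for the multiconnection test) builds a small veth/bridge topology between the host-side initiator and the nvmf_tgt_ns_spdk namespace. The condensed sketch below uses the interface names and 10.0.0.x addresses from this run and keeps only the setup path; the "Cannot find device" and "Cannot open network namespace" messages above are just the preceding teardown finding nothing to remove.

# Target namespace plus three veth pairs; the *_if target ends move into the namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator on 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers and allow NVMe/TCP traffic on port 4420 toward the initiator interface.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks, matching the pings in the trace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1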
01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 01:12:12.276 11:09:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:12:13.219 Malloc0 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 01:12:13.219 11:09:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 01:12:13.478 Shutting down the fuzz application 01:12:13.478 11:09:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 01:12:13.736 Shutting down the fuzz application 01:12:13.737 11:09:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:12:13.737 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:13.737 11:09:18 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 01:12:13.737 11:09:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:13.737 11:09:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 01:12:13.737 11:09:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 01:12:13.737 11:09:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 01:12:13.737 11:09:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 01:12:13.995 11:09:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:12:13.995 11:09:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 01:12:13.995 11:09:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 01:12:13.995 11:09:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:12:13.995 rmmod nvme_tcp 01:12:13.995 rmmod nvme_fabrics 01:12:13.995 rmmod nvme_keyring 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 86761 ']' 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 86761 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 86761 ']' 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 86761 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86761 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:12:13.995 killing process with pid 86761 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86761' 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 86761 01:12:13.995 11:09:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 86761 01:12:14.265 11:09:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:12:14.265 11:09:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:12:14.265 11:09:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:12:14.265 11:09:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:12:14.265 11:09:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 01:12:14.265 11:09:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:12:14.265 11:09:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:12:14.265 11:09:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:14.265 11:09:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:12:14.265 11:09:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 01:12:14.265 01:12:14.265 real 0m2.587s 01:12:14.265 user 0m2.341s 01:12:14.265 sys 0m0.686s 01:12:14.265 11:09:19 
nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 01:12:14.265 11:09:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:12:14.265 ************************************ 01:12:14.265 END TEST nvmf_fuzz 01:12:14.265 ************************************ 01:12:14.265 11:09:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:12:14.265 11:09:19 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 01:12:14.265 11:09:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:12:14.265 11:09:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:12:14.265 11:09:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:12:14.524 ************************************ 01:12:14.524 START TEST nvmf_multiconnection 01:12:14.524 ************************************ 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 01:12:14.524 * Looking for test storage... 01:12:14.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # 
have_pci_nics=0 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:12:14.524 Cannot find device "nvmf_tgt_br" 01:12:14.524 11:09:19 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:12:14.524 Cannot find device "nvmf_tgt_br2" 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:12:14.524 Cannot find device "nvmf_tgt_br" 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 01:12:14.524 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:12:14.782 Cannot find device "nvmf_tgt_br2" 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:12:14.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:12:14.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:12:14.782 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:12:15.040 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:12:15.040 11:09:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:12:15.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:12:15.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 01:12:15.040 01:12:15.040 --- 10.0.0.2 ping statistics --- 01:12:15.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:15.040 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:12:15.040 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:12:15.040 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 01:12:15.040 01:12:15.040 --- 10.0.0.3 ping statistics --- 01:12:15.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:15.040 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:12:15.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:12:15.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 01:12:15.040 01:12:15.040 --- 10.0.0.1 ping statistics --- 01:12:15.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:15.040 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=86951 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 86951 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 86951 ']' 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:12:15.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 01:12:15.040 11:09:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:15.040 [2024-07-22 11:09:20.182715] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:12:15.040 [2024-07-22 11:09:20.182780] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:12:15.298 [2024-07-22 11:09:20.319961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:12:15.298 [2024-07-22 11:09:20.361581] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
01:12:15.298 [2024-07-22 11:09:20.361634] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:12:15.298 [2024-07-22 11:09:20.361644] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:12:15.298 [2024-07-22 11:09:20.361654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:12:15.298 [2024-07-22 11:09:20.361661] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:12:15.298 [2024-07-22 11:09:20.361772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:12:15.298 [2024-07-22 11:09:20.361961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:12:15.298 [2024-07-22 11:09:20.362751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:12:15.298 [2024-07-22 11:09:20.362751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:12:15.298 [2024-07-22 11:09:20.403742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:12:15.864 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:12:15.864 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 01:12:15.864 11:09:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:12:15.864 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 01:12:15.864 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:15.864 11:09:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:12:15.864 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:12:15.864 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:15.864 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 [2024-07-22 11:09:21.073656] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 Malloc1 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 [2024-07-22 11:09:21.159105] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 Malloc2 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 Malloc3 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
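The multiconnection setup traced here repeats one RPC sequence per subsystem, eleven times in total (NVMF_SUBSYS=11), and the loop continues below this point. A condensed sketch of that loop follows, using rpc.py and the 10.0.0.2:4420 listener from this run; it is a reading of the rpc_cmd calls in the trace, not the multiconnection.sh source itself, and it assumes rpc.py's default /var/tmp/spdk.sock target socket.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# The TCP transport is created once, before the per-subsystem loop.
$RPC nvmf_create_transport -t tcp -o -u 8192

for i in $(seq 1 11); do
    $RPC bdev_malloc_create 64 512 -b "Malloc$i"                              # 64 MB malloc bdev, 512 B blocks
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # allow any host, serial SPDK$i
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"       # expose the bdev as a namespace
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done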
01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 Malloc4 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:16.123 11:09:21 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.123 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 Malloc5 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 Malloc6 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 01:12:16.383 11:09:21 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 Malloc7 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 Malloc8 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 Malloc9 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.383 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.643 Malloc10 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.643 11:09:21 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.643 Malloc11 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:16.643 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb 
-t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:12:16.902 11:09:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 01:12:16.902 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:12:16.902 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:12:16.902 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:12:16.902 11:09:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:12:18.805 11:09:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:12:18.805 11:09:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:12:18.805 11:09:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 01:12:18.805 11:09:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:12:18.805 11:09:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:12:18.805 11:09:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:12:18.805 11:09:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:18.805 11:09:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 01:12:19.063 11:09:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 01:12:19.063 11:09:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:12:19.063 11:09:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:12:19.063 11:09:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:12:19.063 11:09:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:12:20.965 11:09:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:12:20.965 11:09:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:12:20.965 11:09:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 01:12:20.965 11:09:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:12:20.965 11:09:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:12:20.965 11:09:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:12:20.965 11:09:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:20.965 11:09:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 01:12:21.223 11:09:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 01:12:21.223 11:09:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:12:21.223 11:09:26 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:12:21.223 11:09:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:12:21.223 11:09:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:12:23.125 11:09:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:12:23.125 11:09:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:12:23.125 11:09:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 01:12:23.125 11:09:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:12:23.125 11:09:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:12:23.125 11:09:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:12:23.125 11:09:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:23.125 11:09:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 01:12:23.383 11:09:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 01:12:23.383 11:09:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:12:23.383 11:09:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:12:23.383 11:09:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:12:23.383 11:09:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:12:25.321 11:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:12:25.321 11:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 01:12:25.321 11:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:12:25.321 11:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:12:25.321 11:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:12:25.321 11:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:12:25.321 11:09:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:25.321 11:09:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 01:12:25.321 11:09:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 01:12:25.321 11:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:12:25.321 11:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:12:25.321 11:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:12:25.321 11:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:12:27.858 11:09:32 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:12:27.858 11:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 01:12:27.858 11:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:12:27.858 11:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:12:27.858 11:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:12:27.858 11:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:12:27.858 11:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:27.858 11:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 01:12:27.858 11:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 01:12:27.858 11:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:12:27.858 11:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:12:27.858 11:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:12:27.858 11:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:12:29.762 11:09:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:12:29.762 11:09:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:12:29.762 11:09:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 01:12:29.762 11:09:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:12:29.762 11:09:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:12:29.762 11:09:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:12:29.762 11:09:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:29.762 11:09:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 01:12:29.762 11:09:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 01:12:29.762 11:09:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:12:29.762 11:09:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:12:29.762 11:09:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:12:29.762 11:09:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:12:31.665 11:09:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:12:31.665 11:09:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:12:31.665 11:09:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 01:12:31.924 
11:09:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:12:31.924 11:09:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:12:31.924 11:09:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:12:31.924 11:09:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:31.924 11:09:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 01:12:31.924 11:09:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 01:12:31.924 11:09:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:12:31.924 11:09:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:12:31.924 11:09:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:12:31.924 11:09:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:12:34.454 11:09:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:12:34.454 11:09:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:12:34.454 11:09:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 01:12:34.454 11:09:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:12:34.454 11:09:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:12:34.454 11:09:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:12:34.454 11:09:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:34.454 11:09:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 01:12:34.454 11:09:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 01:12:34.454 11:09:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:12:34.454 11:09:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:12:34.454 11:09:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:12:34.454 11:09:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:12:36.373 11:09:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:12:36.373 11:09:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:12:36.373 11:09:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 01:12:36.373 11:09:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:12:36.373 11:09:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:12:36.373 11:09:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # 
return 0 01:12:36.373 11:09:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:36.373 11:09:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 01:12:36.373 11:09:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 01:12:36.373 11:09:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:12:36.373 11:09:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:12:36.373 11:09:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:12:36.373 11:09:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:12:38.290 11:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:12:38.290 11:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:12:38.290 11:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 01:12:38.290 11:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:12:38.290 11:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:12:38.290 11:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:12:38.290 11:09:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:12:38.290 11:09:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 01:12:38.548 11:09:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 01:12:38.548 11:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:12:38.548 11:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:12:38.548 11:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:12:38.548 11:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:12:40.447 11:09:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:12:40.447 11:09:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:12:40.447 11:09:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 01:12:40.705 11:09:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:12:40.705 11:09:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:12:40.705 11:09:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:12:40.705 11:09:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 01:12:40.705 [global] 01:12:40.705 thread=1 01:12:40.705 invalidate=1 01:12:40.705 rw=read 01:12:40.705 time_based=1 01:12:40.705 
runtime=10 01:12:40.705 ioengine=libaio 01:12:40.705 direct=1 01:12:40.705 bs=262144 01:12:40.705 iodepth=64 01:12:40.705 norandommap=1 01:12:40.705 numjobs=1 01:12:40.705 01:12:40.705 [job0] 01:12:40.705 filename=/dev/nvme0n1 01:12:40.705 [job1] 01:12:40.705 filename=/dev/nvme10n1 01:12:40.705 [job2] 01:12:40.705 filename=/dev/nvme1n1 01:12:40.705 [job3] 01:12:40.705 filename=/dev/nvme2n1 01:12:40.705 [job4] 01:12:40.705 filename=/dev/nvme3n1 01:12:40.705 [job5] 01:12:40.705 filename=/dev/nvme4n1 01:12:40.705 [job6] 01:12:40.705 filename=/dev/nvme5n1 01:12:40.705 [job7] 01:12:40.705 filename=/dev/nvme6n1 01:12:40.705 [job8] 01:12:40.705 filename=/dev/nvme7n1 01:12:40.705 [job9] 01:12:40.705 filename=/dev/nvme8n1 01:12:40.705 [job10] 01:12:40.705 filename=/dev/nvme9n1 01:12:40.963 Could not set queue depth (nvme0n1) 01:12:40.963 Could not set queue depth (nvme10n1) 01:12:40.963 Could not set queue depth (nvme1n1) 01:12:40.963 Could not set queue depth (nvme2n1) 01:12:40.963 Could not set queue depth (nvme3n1) 01:12:40.963 Could not set queue depth (nvme4n1) 01:12:40.963 Could not set queue depth (nvme5n1) 01:12:40.963 Could not set queue depth (nvme6n1) 01:12:40.963 Could not set queue depth (nvme7n1) 01:12:40.963 Could not set queue depth (nvme8n1) 01:12:40.963 Could not set queue depth (nvme9n1) 01:12:40.963 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:40.963 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:40.963 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:40.963 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:40.963 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:40.963 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:40.963 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:40.963 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:40.963 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:40.963 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:40.963 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:40.963 fio-3.35 01:12:40.963 Starting 11 threads 01:12:53.162 01:12:53.162 job0: (groupid=0, jobs=1): err= 0: pid=87410: Mon Jul 22 11:09:56 2024 01:12:53.162 read: IOPS=379, BW=94.8MiB/s (99.5MB/s)(961MiB/10127msec) 01:12:53.162 slat (usec): min=24, max=66086, avg=2559.24, stdev=6511.50 01:12:53.162 clat (msec): min=38, max=292, avg=165.88, stdev=31.90 01:12:53.162 lat (msec): min=39, max=292, avg=168.44, stdev=32.75 01:12:53.162 clat percentiles (msec): 01:12:53.162 | 1.00th=[ 83], 5.00th=[ 108], 10.00th=[ 127], 20.00th=[ 142], 01:12:53.162 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 174], 01:12:53.162 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 199], 95.00th=[ 220], 01:12:53.162 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 284], 99.95th=[ 292], 01:12:53.162 | 99.99th=[ 292] 01:12:53.162 bw ( KiB/s): min=71536, max=143360, per=5.62%, 
avg=96658.00, stdev=16584.68, samples=20 01:12:53.162 iops : min= 279, max= 560, avg=377.50, stdev=64.83, samples=20 01:12:53.162 lat (msec) : 50=0.44%, 100=3.07%, 250=96.02%, 500=0.47% 01:12:53.162 cpu : usr=0.17%, sys=2.21%, ctx=982, majf=0, minf=4097 01:12:53.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 01:12:53.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:53.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:12:53.162 issued rwts: total=3842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:53.162 latency : target=0, window=0, percentile=100.00%, depth=64 01:12:53.162 job1: (groupid=0, jobs=1): err= 0: pid=87411: Mon Jul 22 11:09:56 2024 01:12:53.162 read: IOPS=827, BW=207MiB/s (217MB/s)(2082MiB/10065msec) 01:12:53.162 slat (usec): min=24, max=52088, avg=1185.57, stdev=2867.46 01:12:53.162 clat (msec): min=19, max=159, avg=76.04, stdev=23.27 01:12:53.162 lat (msec): min=20, max=160, avg=77.22, stdev=23.57 01:12:53.162 clat percentiles (msec): 01:12:53.162 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 38], 20.00th=[ 62], 01:12:53.162 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 83], 01:12:53.162 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 114], 01:12:53.162 | 99.00th=[ 127], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 153], 01:12:53.162 | 99.99th=[ 161] 01:12:53.162 bw ( KiB/s): min=141312, max=394752, per=12.30%, avg=211477.50, stdev=65884.30, samples=20 01:12:53.162 iops : min= 552, max= 1542, avg=825.90, stdev=257.47, samples=20 01:12:53.162 lat (msec) : 20=0.01%, 50=14.62%, 100=70.62%, 250=14.74% 01:12:53.162 cpu : usr=0.50%, sys=4.55%, ctx=1846, majf=0, minf=4097 01:12:53.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 01:12:53.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:53.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:12:53.162 issued rwts: total=8329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:53.162 latency : target=0, window=0, percentile=100.00%, depth=64 01:12:53.162 job2: (groupid=0, jobs=1): err= 0: pid=87413: Mon Jul 22 11:09:56 2024 01:12:53.162 read: IOPS=628, BW=157MiB/s (165MB/s)(1593MiB/10138msec) 01:12:53.162 slat (usec): min=17, max=118536, avg=1529.34, stdev=5012.12 01:12:53.162 clat (msec): min=7, max=326, avg=100.08, stdev=51.33 01:12:53.162 lat (msec): min=8, max=326, avg=101.61, stdev=52.16 01:12:53.162 clat percentiles (msec): 01:12:53.162 | 1.00th=[ 34], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 64], 01:12:53.162 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 84], 01:12:53.162 | 70.00th=[ 107], 80.00th=[ 165], 90.00th=[ 182], 95.00th=[ 203], 01:12:53.162 | 99.00th=[ 232], 99.50th=[ 255], 99.90th=[ 305], 99.95th=[ 305], 01:12:53.162 | 99.99th=[ 326] 01:12:53.162 bw ( KiB/s): min=77668, max=266240, per=9.39%, avg=161398.20, stdev=68944.88, samples=20 01:12:53.162 iops : min= 303, max= 1040, avg=630.35, stdev=269.34, samples=20 01:12:53.162 lat (msec) : 10=0.08%, 20=0.16%, 50=1.90%, 100=65.29%, 250=31.92% 01:12:53.162 lat (msec) : 500=0.66% 01:12:53.162 cpu : usr=0.34%, sys=3.47%, ctx=1446, majf=0, minf=4097 01:12:53.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 01:12:53.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:53.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:12:53.162 issued rwts: total=6373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:53.162 
latency : target=0, window=0, percentile=100.00%, depth=64 01:12:53.162 job3: (groupid=0, jobs=1): err= 0: pid=87415: Mon Jul 22 11:09:56 2024 01:12:53.162 read: IOPS=964, BW=241MiB/s (253MB/s)(2415MiB/10019msec) 01:12:53.162 slat (usec): min=15, max=49768, avg=1011.72, stdev=2413.89 01:12:53.162 clat (msec): min=5, max=170, avg=65.24, stdev=23.06 01:12:53.162 lat (msec): min=8, max=174, avg=66.25, stdev=23.39 01:12:53.162 clat percentiles (msec): 01:12:53.162 | 1.00th=[ 28], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 45], 01:12:53.162 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 65], 01:12:53.162 | 70.00th=[ 68], 80.00th=[ 87], 90.00th=[ 97], 95.00th=[ 111], 01:12:53.162 | 99.00th=[ 142], 99.50th=[ 150], 99.90th=[ 161], 99.95th=[ 161], 01:12:53.162 | 99.99th=[ 171] 01:12:53.162 bw ( KiB/s): min=106496, max=375808, per=14.28%, avg=245583.55, stdev=78215.61, samples=20 01:12:53.162 iops : min= 416, max= 1468, avg=959.20, stdev=305.51, samples=20 01:12:53.163 lat (msec) : 10=0.09%, 20=0.36%, 50=29.11%, 100=62.78%, 250=7.65% 01:12:53.163 cpu : usr=0.46%, sys=5.04%, ctx=2141, majf=0, minf=4097 01:12:53.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 01:12:53.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:53.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:12:53.163 issued rwts: total=9660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:53.163 latency : target=0, window=0, percentile=100.00%, depth=64 01:12:53.163 job4: (groupid=0, jobs=1): err= 0: pid=87419: Mon Jul 22 11:09:56 2024 01:12:53.163 read: IOPS=371, BW=93.0MiB/s (97.5MB/s)(942MiB/10128msec) 01:12:53.163 slat (usec): min=23, max=85468, avg=2583.01, stdev=6826.33 01:12:53.163 clat (msec): min=37, max=312, avg=169.14, stdev=29.75 01:12:53.163 lat (msec): min=38, max=322, avg=171.72, stdev=30.51 01:12:53.163 clat percentiles (msec): 01:12:53.163 | 1.00th=[ 78], 5.00th=[ 123], 10.00th=[ 136], 20.00th=[ 150], 01:12:53.163 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 174], 01:12:53.163 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 201], 95.00th=[ 222], 01:12:53.163 | 99.00th=[ 241], 99.50th=[ 279], 99.90th=[ 305], 99.95th=[ 313], 01:12:53.163 | 99.99th=[ 313] 01:12:53.163 bw ( KiB/s): min=72192, max=128000, per=5.51%, avg=94784.00, stdev=13705.89, samples=20 01:12:53.163 iops : min= 282, max= 500, avg=370.20, stdev=53.54, samples=20 01:12:53.163 lat (msec) : 50=0.13%, 100=1.41%, 250=97.64%, 500=0.82% 01:12:53.163 cpu : usr=0.25%, sys=2.12%, ctx=973, majf=0, minf=4097 01:12:53.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 01:12:53.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:53.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:12:53.163 issued rwts: total=3767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:53.163 latency : target=0, window=0, percentile=100.00%, depth=64 01:12:53.163 job5: (groupid=0, jobs=1): err= 0: pid=87421: Mon Jul 22 11:09:56 2024 01:12:53.163 read: IOPS=459, BW=115MiB/s (120MB/s)(1164MiB/10132msec) 01:12:53.163 slat (usec): min=16, max=70506, avg=2052.29, stdev=5307.00 01:12:53.163 clat (msec): min=7, max=304, avg=136.98, stdev=48.00 01:12:53.163 lat (msec): min=7, max=304, avg=139.03, stdev=48.89 01:12:53.163 clat percentiles (msec): 01:12:53.163 | 1.00th=[ 31], 5.00th=[ 83], 10.00th=[ 89], 20.00th=[ 94], 01:12:53.163 | 30.00th=[ 99], 40.00th=[ 107], 50.00th=[ 126], 60.00th=[ 165], 01:12:53.163 | 70.00th=[ 176], 
80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 218], 01:12:53.163 | 99.00th=[ 239], 99.50th=[ 245], 99.90th=[ 305], 99.95th=[ 305], 01:12:53.163 | 99.99th=[ 305] 01:12:53.163 bw ( KiB/s): min=72192, max=175616, per=6.83%, avg=117509.40, stdev=37195.86, samples=20 01:12:53.163 iops : min= 282, max= 686, avg=458.95, stdev=145.26, samples=20 01:12:53.163 lat (msec) : 10=0.11%, 20=0.11%, 50=1.98%, 100=30.68%, 250=66.79% 01:12:53.163 lat (msec) : 500=0.34% 01:12:53.163 cpu : usr=0.20%, sys=2.52%, ctx=1187, majf=0, minf=4097 01:12:53.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 01:12:53.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:53.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:12:53.163 issued rwts: total=4655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:53.163 latency : target=0, window=0, percentile=100.00%, depth=64 01:12:53.163 job6: (groupid=0, jobs=1): err= 0: pid=87422: Mon Jul 22 11:09:56 2024 01:12:53.163 read: IOPS=615, BW=154MiB/s (161MB/s)(1541MiB/10016msec) 01:12:53.163 slat (usec): min=17, max=130740, avg=1586.38, stdev=5779.36 01:12:53.163 clat (msec): min=3, max=353, avg=102.23, stdev=55.74 01:12:53.163 lat (msec): min=3, max=353, avg=103.82, stdev=56.76 01:12:53.163 clat percentiles (msec): 01:12:53.163 | 1.00th=[ 28], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 61], 01:12:53.163 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 74], 01:12:53.163 | 70.00th=[ 148], 80.00th=[ 163], 90.00th=[ 192], 95.00th=[ 203], 01:12:53.163 | 99.00th=[ 228], 99.50th=[ 236], 99.90th=[ 264], 99.95th=[ 268], 01:12:53.163 | 99.99th=[ 355] 01:12:53.163 bw ( KiB/s): min=74240, max=267752, per=9.08%, avg=156153.80, stdev=77439.23, samples=20 01:12:53.163 iops : min= 290, max= 1045, avg=609.80, stdev=302.38, samples=20 01:12:53.163 lat (msec) : 4=0.05%, 10=0.13%, 20=0.31%, 50=1.83%, 100=62.48% 01:12:53.163 lat (msec) : 250=35.09%, 500=0.11% 01:12:53.163 cpu : usr=0.37%, sys=3.22%, ctx=1427, majf=0, minf=4097 01:12:53.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 01:12:53.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:53.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:12:53.163 issued rwts: total=6164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:53.163 latency : target=0, window=0, percentile=100.00%, depth=64 01:12:53.163 job7: (groupid=0, jobs=1): err= 0: pid=87423: Mon Jul 22 11:09:56 2024 01:12:53.163 read: IOPS=921, BW=230MiB/s (242MB/s)(2319MiB/10059msec) 01:12:53.163 slat (usec): min=16, max=133692, avg=1043.21, stdev=2975.74 01:12:53.163 clat (msec): min=3, max=208, avg=68.24, stdev=31.16 01:12:53.163 lat (msec): min=3, max=265, avg=69.28, stdev=31.58 01:12:53.163 clat percentiles (msec): 01:12:53.163 | 1.00th=[ 26], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 43], 01:12:53.163 | 30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 60], 60.00th=[ 72], 01:12:53.163 | 70.00th=[ 86], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 131], 01:12:53.163 | 99.00th=[ 171], 99.50th=[ 188], 99.90th=[ 209], 99.95th=[ 209], 01:12:53.163 | 99.99th=[ 209] 01:12:53.163 bw ( KiB/s): min=82084, max=379392, per=13.71%, avg=235738.00, stdev=100280.37, samples=20 01:12:53.163 iops : min= 320, max= 1482, avg=920.75, stdev=391.81, samples=20 01:12:53.163 lat (msec) : 4=0.03%, 10=0.22%, 20=0.45%, 50=45.45%, 100=41.22% 01:12:53.163 lat (msec) : 250=12.63% 01:12:53.163 cpu : usr=0.39%, sys=4.89%, ctx=2127, majf=0, minf=4097 01:12:53.163 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 01:12:53.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:53.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:12:53.163 issued rwts: total=9274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:53.163 latency : target=0, window=0, percentile=100.00%, depth=64 01:12:53.163 job8: (groupid=0, jobs=1): err= 0: pid=87424: Mon Jul 22 11:09:56 2024 01:12:53.163 read: IOPS=477, BW=119MiB/s (125MB/s)(1201MiB/10057msec) 01:12:53.163 slat (usec): min=32, max=71000, avg=1982.41, stdev=5057.21 01:12:53.163 clat (msec): min=8, max=247, avg=131.74, stdev=44.11 01:12:53.163 lat (msec): min=8, max=264, avg=133.73, stdev=44.78 01:12:53.163 clat percentiles (msec): 01:12:53.163 | 1.00th=[ 53], 5.00th=[ 84], 10.00th=[ 88], 20.00th=[ 92], 01:12:53.163 | 30.00th=[ 97], 40.00th=[ 104], 50.00th=[ 117], 60.00th=[ 144], 01:12:53.163 | 70.00th=[ 161], 80.00th=[ 176], 90.00th=[ 194], 95.00th=[ 213], 01:12:53.163 | 99.00th=[ 228], 99.50th=[ 232], 99.90th=[ 241], 99.95th=[ 241], 01:12:53.163 | 99.99th=[ 247] 01:12:53.163 bw ( KiB/s): min=73580, max=178688, per=7.05%, avg=121288.95, stdev=38297.60, samples=20 01:12:53.163 iops : min= 287, max= 698, avg=473.65, stdev=149.59, samples=20 01:12:53.163 lat (msec) : 10=0.06%, 20=0.27%, 50=0.48%, 100=34.49%, 250=64.70% 01:12:53.163 cpu : usr=0.26%, sys=2.57%, ctx=1258, majf=0, minf=4097 01:12:53.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 01:12:53.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:53.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:12:53.163 issued rwts: total=4804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:53.163 latency : target=0, window=0, percentile=100.00%, depth=64 01:12:53.163 job9: (groupid=0, jobs=1): err= 0: pid=87425: Mon Jul 22 11:09:56 2024 01:12:53.163 read: IOPS=739, BW=185MiB/s (194MB/s)(1873MiB/10137msec) 01:12:53.163 slat (usec): min=16, max=57371, avg=1299.33, stdev=3473.88 01:12:53.163 clat (msec): min=7, max=319, avg=85.08, stdev=46.92 01:12:53.163 lat (msec): min=7, max=319, avg=86.38, stdev=47.60 01:12:53.163 clat percentiles (msec): 01:12:53.163 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 37], 01:12:53.163 | 30.00th=[ 60], 40.00th=[ 67], 50.00th=[ 73], 60.00th=[ 90], 01:12:53.163 | 70.00th=[ 97], 80.00th=[ 112], 90.00th=[ 171], 95.00th=[ 180], 01:12:53.163 | 99.00th=[ 194], 99.50th=[ 205], 99.90th=[ 309], 99.95th=[ 309], 01:12:53.163 | 99.99th=[ 321] 01:12:53.163 bw ( KiB/s): min=84822, max=463968, per=11.05%, avg=190065.35, stdev=104270.52, samples=20 01:12:53.163 iops : min= 331, max= 1812, avg=742.30, stdev=407.31, samples=20 01:12:53.163 lat (msec) : 10=0.12%, 20=0.08%, 50=25.04%, 100=48.95%, 250=25.52% 01:12:53.163 lat (msec) : 500=0.29% 01:12:53.163 cpu : usr=0.47%, sys=3.86%, ctx=1728, majf=0, minf=4097 01:12:53.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 01:12:53.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:53.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:12:53.163 issued rwts: total=7492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:53.163 latency : target=0, window=0, percentile=100.00%, depth=64 01:12:53.163 job10: (groupid=0, jobs=1): err= 0: pid=87426: Mon Jul 22 11:09:56 2024 01:12:53.163 read: IOPS=368, BW=92.2MiB/s (96.7MB/s)(934MiB/10131msec) 01:12:53.163 slat (usec): min=16, max=64885, avg=2629.93, 
stdev=6447.70 01:12:53.163 clat (msec): min=51, max=336, avg=170.62, stdev=28.62 01:12:53.163 lat (msec): min=52, max=363, avg=173.25, stdev=29.46 01:12:53.163 clat percentiles (msec): 01:12:53.163 | 1.00th=[ 102], 5.00th=[ 131], 10.00th=[ 138], 20.00th=[ 150], 01:12:53.163 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 01:12:53.163 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 201], 95.00th=[ 222], 01:12:53.163 | 99.00th=[ 255], 99.50th=[ 279], 99.90th=[ 321], 99.95th=[ 338], 01:12:53.163 | 99.99th=[ 338] 01:12:53.163 bw ( KiB/s): min=72704, max=122880, per=5.46%, avg=93967.25, stdev=13066.58, samples=20 01:12:53.163 iops : min= 284, max= 480, avg=366.95, stdev=51.11, samples=20 01:12:53.163 lat (msec) : 100=0.83%, 250=97.94%, 500=1.23% 01:12:53.163 cpu : usr=0.14%, sys=2.11%, ctx=944, majf=0, minf=4097 01:12:53.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 01:12:53.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:53.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:12:53.163 issued rwts: total=3736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:53.163 latency : target=0, window=0, percentile=100.00%, depth=64 01:12:53.163 01:12:53.163 Run status group 0 (all jobs): 01:12:53.163 READ: bw=1679MiB/s (1761MB/s), 92.2MiB/s-241MiB/s (96.7MB/s-253MB/s), io=16.6GiB (17.8GB), run=10016-10138msec 01:12:53.163 01:12:53.163 Disk stats (read/write): 01:12:53.163 nvme0n1: ios=7557/0, merge=0/0, ticks=1222998/0, in_queue=1222998, util=97.97% 01:12:53.163 nvme10n1: ios=16562/0, merge=0/0, ticks=1234393/0, in_queue=1234393, util=98.25% 01:12:53.163 nvme1n1: ios=12632/0, merge=0/0, ticks=1225557/0, in_queue=1225557, util=98.38% 01:12:53.164 nvme2n1: ios=18806/0, merge=0/0, ticks=1205218/0, in_queue=1205218, util=98.23% 01:12:53.164 nvme3n1: ios=7426/0, merge=0/0, ticks=1223590/0, in_queue=1223590, util=98.33% 01:12:53.164 nvme4n1: ios=9194/0, merge=0/0, ticks=1225671/0, in_queue=1225671, util=98.48% 01:12:53.164 nvme5n1: ios=11803/0, merge=0/0, ticks=1203629/0, in_queue=1203629, util=98.52% 01:12:53.164 nvme6n1: ios=18457/0, merge=0/0, ticks=1233547/0, in_queue=1233547, util=98.64% 01:12:53.164 nvme7n1: ios=9508/0, merge=0/0, ticks=1231235/0, in_queue=1231235, util=98.74% 01:12:53.164 nvme8n1: ios=14885/0, merge=0/0, ticks=1229440/0, in_queue=1229440, util=99.00% 01:12:53.164 nvme9n1: ios=7375/0, merge=0/0, ticks=1225250/0, in_queue=1225250, util=99.15% 01:12:53.164 11:09:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 01:12:53.164 [global] 01:12:53.164 thread=1 01:12:53.164 invalidate=1 01:12:53.164 rw=randwrite 01:12:53.164 time_based=1 01:12:53.164 runtime=10 01:12:53.164 ioengine=libaio 01:12:53.164 direct=1 01:12:53.164 bs=262144 01:12:53.164 iodepth=64 01:12:53.164 norandommap=1 01:12:53.164 numjobs=1 01:12:53.164 01:12:53.164 [job0] 01:12:53.164 filename=/dev/nvme0n1 01:12:53.164 [job1] 01:12:53.164 filename=/dev/nvme10n1 01:12:53.164 [job2] 01:12:53.164 filename=/dev/nvme1n1 01:12:53.164 [job3] 01:12:53.164 filename=/dev/nvme2n1 01:12:53.164 [job4] 01:12:53.164 filename=/dev/nvme3n1 01:12:53.164 [job5] 01:12:53.164 filename=/dev/nvme4n1 01:12:53.164 [job6] 01:12:53.164 filename=/dev/nvme5n1 01:12:53.164 [job7] 01:12:53.164 filename=/dev/nvme6n1 01:12:53.164 [job8] 01:12:53.164 filename=/dev/nvme7n1 01:12:53.164 [job9] 01:12:53.164 filename=/dev/nvme8n1 01:12:53.164 [job10] 
01:12:53.164 filename=/dev/nvme9n1 01:12:53.164 Could not set queue depth (nvme0n1) 01:12:53.164 Could not set queue depth (nvme10n1) 01:12:53.164 Could not set queue depth (nvme1n1) 01:12:53.164 Could not set queue depth (nvme2n1) 01:12:53.164 Could not set queue depth (nvme3n1) 01:12:53.164 Could not set queue depth (nvme4n1) 01:12:53.164 Could not set queue depth (nvme5n1) 01:12:53.164 Could not set queue depth (nvme6n1) 01:12:53.164 Could not set queue depth (nvme7n1) 01:12:53.164 Could not set queue depth (nvme8n1) 01:12:53.164 Could not set queue depth (nvme9n1) 01:12:53.164 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:53.164 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:53.164 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:53.164 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:53.164 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:53.164 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:53.164 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:53.164 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:53.164 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:53.164 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:53.164 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:12:53.164 fio-3.35 01:12:53.164 Starting 11 threads 01:13:03.135 01:13:03.135 job0: (groupid=0, jobs=1): err= 0: pid=87631: Mon Jul 22 11:10:07 2024 01:13:03.135 write: IOPS=663, BW=166MiB/s (174MB/s)(1674MiB/10090msec); 0 zone resets 01:13:03.135 slat (usec): min=28, max=43334, avg=1488.03, stdev=2522.38 01:13:03.135 clat (msec): min=9, max=180, avg=94.92, stdev=12.99 01:13:03.135 lat (msec): min=9, max=180, avg=96.41, stdev=12.99 01:13:03.135 clat percentiles (msec): 01:13:03.135 | 1.00th=[ 86], 5.00th=[ 87], 10.00th=[ 88], 20.00th=[ 89], 01:13:03.135 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 94], 01:13:03.135 | 70.00th=[ 94], 80.00th=[ 95], 90.00th=[ 99], 95.00th=[ 128], 01:13:03.135 | 99.00th=[ 155], 99.50th=[ 171], 99.90th=[ 180], 99.95th=[ 180], 01:13:03.135 | 99.99th=[ 180] 01:13:03.135 bw ( KiB/s): min=117248, max=178688, per=14.01%, avg=169752.80, stdev=16380.16, samples=20 01:13:03.135 iops : min= 458, max= 698, avg=663.00, stdev=63.97, samples=20 01:13:03.135 lat (msec) : 10=0.06%, 20=0.12%, 50=0.24%, 100=90.56%, 250=9.02% 01:13:03.135 cpu : usr=2.41%, sys=2.50%, ctx=8421, majf=0, minf=1 01:13:03.135 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 01:13:03.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:03.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:13:03.135 issued rwts: total=0,6696,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:03.135 latency : target=0, window=0, percentile=100.00%, depth=64 01:13:03.135 job1: (groupid=0, 
jobs=1): err= 0: pid=87632: Mon Jul 22 11:10:07 2024 01:13:03.135 write: IOPS=481, BW=120MiB/s (126MB/s)(1222MiB/10145msec); 0 zone resets 01:13:03.135 slat (usec): min=18, max=41726, avg=2039.27, stdev=3529.39 01:13:03.135 clat (msec): min=43, max=308, avg=130.74, stdev=23.63 01:13:03.135 lat (msec): min=43, max=308, avg=132.78, stdev=23.70 01:13:03.135 clat percentiles (msec): 01:13:03.135 | 1.00th=[ 87], 5.00th=[ 92], 10.00th=[ 95], 20.00th=[ 122], 01:13:03.135 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 129], 60.00th=[ 130], 01:13:03.135 | 70.00th=[ 131], 80.00th=[ 150], 90.00th=[ 165], 95.00th=[ 167], 01:13:03.135 | 99.00th=[ 186], 99.50th=[ 247], 99.90th=[ 300], 99.95th=[ 300], 01:13:03.135 | 99.99th=[ 309] 01:13:03.136 bw ( KiB/s): min=96768, max=162304, per=10.20%, avg=123520.00, stdev=17933.37, samples=20 01:13:03.136 iops : min= 378, max= 634, avg=482.50, stdev=70.05, samples=20 01:13:03.136 lat (msec) : 50=0.16%, 100=11.31%, 250=88.07%, 500=0.45% 01:13:03.136 cpu : usr=1.74%, sys=1.73%, ctx=6027, majf=0, minf=1 01:13:03.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 01:13:03.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:03.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:13:03.136 issued rwts: total=0,4888,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:03.136 latency : target=0, window=0, percentile=100.00%, depth=64 01:13:03.136 job2: (groupid=0, jobs=1): err= 0: pid=87633: Mon Jul 22 11:10:07 2024 01:13:03.136 write: IOPS=486, BW=122MiB/s (128MB/s)(1235MiB/10156msec); 0 zone resets 01:13:03.136 slat (usec): min=20, max=11321, avg=2019.44, stdev=3454.99 01:13:03.136 clat (msec): min=4, max=313, avg=129.47, stdev=26.04 01:13:03.136 lat (msec): min=4, max=313, avg=131.48, stdev=26.23 01:13:03.136 clat percentiles (msec): 01:13:03.136 | 1.00th=[ 52], 5.00th=[ 90], 10.00th=[ 94], 20.00th=[ 122], 01:13:03.136 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 129], 60.00th=[ 130], 01:13:03.136 | 70.00th=[ 132], 80.00th=[ 140], 90.00th=[ 165], 95.00th=[ 167], 01:13:03.136 | 99.00th=[ 188], 99.50th=[ 251], 99.90th=[ 305], 99.95th=[ 305], 01:13:03.136 | 99.99th=[ 313] 01:13:03.136 bw ( KiB/s): min=96768, max=185485, per=10.30%, avg=124733.70, stdev=21245.51, samples=20 01:13:03.136 iops : min= 378, max= 724, avg=487.15, stdev=82.91, samples=20 01:13:03.136 lat (msec) : 10=0.12%, 20=0.24%, 50=0.57%, 100=11.98%, 250=86.64% 01:13:03.136 lat (msec) : 500=0.45% 01:13:03.136 cpu : usr=1.70%, sys=1.78%, ctx=6521, majf=0, minf=1 01:13:03.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 01:13:03.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:03.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:13:03.136 issued rwts: total=0,4941,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:03.136 latency : target=0, window=0, percentile=100.00%, depth=64 01:13:03.136 job3: (groupid=0, jobs=1): err= 0: pid=87642: Mon Jul 22 11:10:07 2024 01:13:03.136 write: IOPS=232, BW=58.0MiB/s (60.8MB/s)(593MiB/10228msec); 0 zone resets 01:13:03.136 slat (usec): min=26, max=47253, avg=4177.63, stdev=7589.80 01:13:03.136 clat (msec): min=16, max=484, avg=271.52, stdev=48.52 01:13:03.136 lat (msec): min=16, max=484, avg=275.70, stdev=48.74 01:13:03.136 clat percentiles (msec): 01:13:03.136 | 1.00th=[ 44], 5.00th=[ 232], 10.00th=[ 249], 20.00th=[ 259], 01:13:03.136 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 275], 60.00th=[ 275], 01:13:03.136 | 70.00th=[ 
288], 80.00th=[ 296], 90.00th=[ 313], 95.00th=[ 330], 01:13:03.136 | 99.00th=[ 384], 99.50th=[ 435], 99.90th=[ 468], 99.95th=[ 485], 01:13:03.136 | 99.99th=[ 485] 01:13:03.136 bw ( KiB/s): min=51200, max=77668, per=4.88%, avg=59099.25, stdev=5625.76, samples=20 01:13:03.136 iops : min= 200, max= 303, avg=230.70, stdev=21.98, samples=20 01:13:03.136 lat (msec) : 20=0.17%, 50=1.05%, 100=1.77%, 250=8.77%, 500=88.24% 01:13:03.136 cpu : usr=0.53%, sys=0.90%, ctx=3006, majf=0, minf=1 01:13:03.136 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.3% 01:13:03.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:03.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:13:03.136 issued rwts: total=0,2373,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:03.136 latency : target=0, window=0, percentile=100.00%, depth=64 01:13:03.136 job4: (groupid=0, jobs=1): err= 0: pid=87645: Mon Jul 22 11:10:07 2024 01:13:03.136 write: IOPS=573, BW=143MiB/s (150MB/s)(1456MiB/10146msec); 0 zone resets 01:13:03.136 slat (usec): min=15, max=21655, avg=1629.74, stdev=3857.19 01:13:03.136 clat (usec): min=1650, max=309553, avg=109862.31, stdev=84632.46 01:13:03.136 lat (msec): min=2, max=309, avg=111.49, stdev=85.86 01:13:03.136 clat percentiles (msec): 01:13:03.136 | 1.00th=[ 12], 5.00th=[ 42], 10.00th=[ 53], 20.00th=[ 54], 01:13:03.136 | 30.00th=[ 55], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 01:13:03.136 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 264], 95.00th=[ 275], 01:13:03.136 | 99.00th=[ 296], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 305], 01:13:03.136 | 99.99th=[ 309] 01:13:03.136 bw ( KiB/s): min=55296, max=311296, per=12.17%, avg=147400.60, stdev=103938.81, samples=20 01:13:03.136 iops : min= 216, max= 1216, avg=575.75, stdev=405.96, samples=20 01:13:03.136 lat (msec) : 2=0.02%, 4=0.05%, 10=0.76%, 20=1.34%, 50=3.80% 01:13:03.136 lat (msec) : 100=58.36%, 250=21.11%, 500=14.57% 01:13:03.136 cpu : usr=1.93%, sys=2.25%, ctx=4076, majf=0, minf=1 01:13:03.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 01:13:03.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:03.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:13:03.136 issued rwts: total=0,5822,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:03.136 latency : target=0, window=0, percentile=100.00%, depth=64 01:13:03.136 job5: (groupid=0, jobs=1): err= 0: pid=87647: Mon Jul 22 11:10:07 2024 01:13:03.136 write: IOPS=668, BW=167MiB/s (175MB/s)(1685MiB/10079msec); 0 zone resets 01:13:03.136 slat (usec): min=26, max=46121, avg=1463.46, stdev=2522.44 01:13:03.136 clat (msec): min=15, max=183, avg=94.21, stdev=12.08 01:13:03.136 lat (msec): min=15, max=183, avg=95.68, stdev=12.07 01:13:03.136 clat percentiles (msec): 01:13:03.136 | 1.00th=[ 68], 5.00th=[ 87], 10.00th=[ 88], 20.00th=[ 89], 01:13:03.136 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 94], 01:13:03.136 | 70.00th=[ 94], 80.00th=[ 95], 90.00th=[ 97], 95.00th=[ 127], 01:13:03.136 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 167], 99.95th=[ 174], 01:13:03.136 | 99.99th=[ 184] 01:13:03.136 bw ( KiB/s): min=111104, max=192512, per=14.11%, avg=170888.20, stdev=18051.26, samples=20 01:13:03.136 iops : min= 434, max= 752, avg=667.60, stdev=70.55, samples=20 01:13:03.136 lat (msec) : 20=0.04%, 50=0.56%, 100=91.26%, 250=8.13% 01:13:03.136 cpu : usr=2.42%, sys=2.39%, ctx=7371, majf=0, minf=1 01:13:03.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.5%, >=64=99.1% 01:13:03.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:03.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:13:03.136 issued rwts: total=0,6740,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:03.136 latency : target=0, window=0, percentile=100.00%, depth=64 01:13:03.136 job6: (groupid=0, jobs=1): err= 0: pid=87648: Mon Jul 22 11:10:07 2024 01:13:03.136 write: IOPS=471, BW=118MiB/s (124MB/s)(1197MiB/10151msec); 0 zone resets 01:13:03.136 slat (usec): min=18, max=16429, avg=2060.85, stdev=3552.22 01:13:03.136 clat (msec): min=18, max=312, avg=133.57, stdev=22.78 01:13:03.136 lat (msec): min=18, max=312, avg=135.63, stdev=22.94 01:13:03.136 clat percentiles (msec): 01:13:03.136 | 1.00th=[ 47], 5.00th=[ 121], 10.00th=[ 122], 20.00th=[ 124], 01:13:03.136 | 30.00th=[ 127], 40.00th=[ 128], 50.00th=[ 129], 60.00th=[ 130], 01:13:03.136 | 70.00th=[ 132], 80.00th=[ 153], 90.00th=[ 165], 95.00th=[ 167], 01:13:03.136 | 99.00th=[ 188], 99.50th=[ 249], 99.90th=[ 300], 99.95th=[ 300], 01:13:03.136 | 99.99th=[ 313] 01:13:03.136 bw ( KiB/s): min=96768, max=149504, per=9.98%, avg=120922.00, stdev=13925.63, samples=20 01:13:03.136 iops : min= 378, max= 584, avg=472.35, stdev=54.40, samples=20 01:13:03.136 lat (msec) : 20=0.06%, 50=1.00%, 100=1.46%, 250=97.01%, 500=0.46% 01:13:03.136 cpu : usr=1.75%, sys=1.59%, ctx=6111, majf=0, minf=1 01:13:03.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 01:13:03.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:03.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:13:03.136 issued rwts: total=0,4788,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:03.136 latency : target=0, window=0, percentile=100.00%, depth=64 01:13:03.136 job7: (groupid=0, jobs=1): err= 0: pid=87649: Mon Jul 22 11:10:07 2024 01:13:03.136 write: IOPS=233, BW=58.3MiB/s (61.1MB/s)(596MiB/10227msec); 0 zone resets 01:13:03.136 slat (usec): min=22, max=68793, avg=4191.93, stdev=7479.40 01:13:03.136 clat (msec): min=27, max=480, avg=270.23, stdev=38.28 01:13:03.136 lat (msec): min=27, max=480, avg=274.42, stdev=38.16 01:13:03.136 clat percentiles (msec): 01:13:03.136 | 1.00th=[ 70], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 259], 01:13:03.136 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 275], 01:13:03.136 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 300], 95.00th=[ 317], 01:13:03.136 | 99.00th=[ 380], 99.50th=[ 430], 99.90th=[ 464], 99.95th=[ 481], 01:13:03.136 | 99.99th=[ 481] 01:13:03.136 bw ( KiB/s): min=49152, max=63488, per=4.90%, avg=59411.65, stdev=3453.89, samples=20 01:13:03.136 iops : min= 192, max= 248, avg=232.00, stdev=13.50, samples=20 01:13:03.136 lat (msec) : 50=0.50%, 100=0.84%, 250=9.44%, 500=89.22% 01:13:03.136 cpu : usr=0.50%, sys=0.90%, ctx=2682, majf=0, minf=1 01:13:03.136 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 01:13:03.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:03.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:13:03.136 issued rwts: total=0,2384,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:03.136 latency : target=0, window=0, percentile=100.00%, depth=64 01:13:03.136 job8: (groupid=0, jobs=1): err= 0: pid=87650: Mon Jul 22 11:10:07 2024 01:13:03.136 write: IOPS=234, BW=58.5MiB/s (61.4MB/s)(599MiB/10231msec); 0 zone resets 01:13:03.136 slat (usec): min=24, max=98626, avg=4168.53, stdev=7522.04 01:13:03.136 clat 
(msec): min=14, max=486, avg=268.98, stdev=37.21 01:13:03.136 lat (msec): min=14, max=486, avg=273.15, stdev=37.04 01:13:03.136 clat percentiles (msec): 01:13:03.136 | 1.00th=[ 70], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 259], 01:13:03.136 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 275], 01:13:03.136 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 296], 95.00th=[ 305], 01:13:03.136 | 99.00th=[ 388], 99.50th=[ 435], 99.90th=[ 472], 99.95th=[ 485], 01:13:03.136 | 99.99th=[ 485] 01:13:03.136 bw ( KiB/s): min=55296, max=63488, per=4.93%, avg=59713.05, stdev=2409.19, samples=20 01:13:03.136 iops : min= 216, max= 248, avg=233.15, stdev= 9.40, samples=20 01:13:03.136 lat (msec) : 20=0.17%, 50=0.33%, 100=0.83%, 250=8.64%, 500=90.03% 01:13:03.136 cpu : usr=0.53%, sys=1.04%, ctx=3140, majf=0, minf=1 01:13:03.136 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 01:13:03.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:03.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:13:03.136 issued rwts: total=0,2396,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:03.136 latency : target=0, window=0, percentile=100.00%, depth=64 01:13:03.136 job9: (groupid=0, jobs=1): err= 0: pid=87651: Mon Jul 22 11:10:07 2024 01:13:03.136 write: IOPS=230, BW=57.5MiB/s (60.3MB/s)(588MiB/10229msec); 0 zone resets 01:13:03.136 slat (usec): min=16, max=76959, avg=4250.29, stdev=7675.02 01:13:03.136 clat (msec): min=78, max=475, avg=273.84, stdev=30.40 01:13:03.136 lat (msec): min=78, max=475, avg=278.09, stdev=29.93 01:13:03.136 clat percentiles (msec): 01:13:03.136 | 1.00th=[ 140], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 262], 01:13:03.136 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 275], 01:13:03.136 | 70.00th=[ 279], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 309], 01:13:03.137 | 99.00th=[ 376], 99.50th=[ 426], 99.90th=[ 460], 99.95th=[ 477], 01:13:03.137 | 99.99th=[ 477] 01:13:03.137 bw ( KiB/s): min=51200, max=63488, per=4.84%, avg=58592.70, stdev=3425.41, samples=20 01:13:03.137 iops : min= 200, max= 248, avg=228.80, stdev=13.38, samples=20 01:13:03.137 lat (msec) : 100=0.34%, 250=8.92%, 500=90.74% 01:13:03.137 cpu : usr=0.53%, sys=0.79%, ctx=3338, majf=0, minf=1 01:13:03.137 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 01:13:03.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:03.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:13:03.137 issued rwts: total=0,2353,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:03.137 latency : target=0, window=0, percentile=100.00%, depth=64 01:13:03.137 job10: (groupid=0, jobs=1): err= 0: pid=87652: Mon Jul 22 11:10:07 2024 01:13:03.137 write: IOPS=492, BW=123MiB/s (129MB/s)(1260MiB/10231msec); 0 zone resets 01:13:03.137 slat (usec): min=17, max=87372, avg=1973.73, stdev=4960.97 01:13:03.137 clat (msec): min=15, max=481, avg=127.93, stdev=108.07 01:13:03.137 lat (msec): min=15, max=481, avg=129.90, stdev=109.63 01:13:03.137 clat percentiles (msec): 01:13:03.137 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 55], 01:13:03.137 | 30.00th=[ 56], 40.00th=[ 56], 50.00th=[ 56], 60.00th=[ 57], 01:13:03.137 | 70.00th=[ 249], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 309], 01:13:03.137 | 99.00th=[ 363], 99.50th=[ 380], 99.90th=[ 447], 99.95th=[ 464], 01:13:03.137 | 99.99th=[ 481] 01:13:03.137 bw ( KiB/s): min=45056, max=294400, per=10.52%, avg=127426.20, stdev=107442.49, samples=20 01:13:03.137 iops 
: min= 176, max= 1150, avg=497.65, stdev=419.67, samples=20 01:13:03.137 lat (msec) : 20=0.08%, 50=0.54%, 100=67.73%, 250=2.70%, 500=28.96% 01:13:03.137 cpu : usr=1.63%, sys=1.73%, ctx=5887, majf=0, minf=1 01:13:03.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 01:13:03.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:03.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:13:03.137 issued rwts: total=0,5038,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:03.137 latency : target=0, window=0, percentile=100.00%, depth=64 01:13:03.137 01:13:03.137 Run status group 0 (all jobs): 01:13:03.137 WRITE: bw=1183MiB/s (1241MB/s), 57.5MiB/s-167MiB/s (60.3MB/s-175MB/s), io=11.8GiB (12.7GB), run=10079-10231msec 01:13:03.137 01:13:03.137 Disk stats (read/write): 01:13:03.137 nvme0n1: ios=50/13284, merge=0/0, ticks=51/1218162, in_queue=1218213, util=98.21% 01:13:03.137 nvme10n1: ios=49/9642, merge=0/0, ticks=50/1211190, in_queue=1211240, util=98.02% 01:13:03.137 nvme1n1: ios=49/9760, merge=0/0, ticks=56/1213759, in_queue=1213815, util=98.44% 01:13:03.137 nvme2n1: ios=49/4624, merge=0/0, ticks=59/1208557, in_queue=1208616, util=98.51% 01:13:03.137 nvme3n1: ios=47/11511, merge=0/0, ticks=43/1213393, in_queue=1213436, util=98.37% 01:13:03.137 nvme4n1: ios=28/13339, merge=0/0, ticks=23/1216226, in_queue=1216249, util=98.35% 01:13:03.137 nvme5n1: ios=0/9449, merge=0/0, ticks=0/1213386, in_queue=1213386, util=98.37% 01:13:03.137 nvme6n1: ios=0/4642, merge=0/0, ticks=0/1207812, in_queue=1207812, util=98.46% 01:13:03.137 nvme7n1: ios=0/4676, merge=0/0, ticks=0/1209680, in_queue=1209680, util=98.80% 01:13:03.137 nvme8n1: ios=0/4577, merge=0/0, ticks=0/1208110, in_queue=1208110, util=98.77% 01:13:03.137 nvme9n1: ios=0/9949, merge=0/0, ticks=0/1208209, in_queue=1208209, util=98.86% 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:13:03.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 01:13:03.137 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 01:13:03.137 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 01:13:03.137 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK4 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 01:13:03.137 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:13:03.137 11:10:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 01:13:03.137 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 
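The xtrace lines above and below repeat the same four steps once per subsystem. Condensed, the teardown multiconnection.sh is running here looks roughly like the sketch below (command names and NQNs are taken from the trace, NVMF_SUBSYS=11 matches the 'seq 1 11' above; the loop body is an approximation, not the script verbatim):

  sync
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      # drop the initiator-side controller for this subsystem
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
      # block until no block device with serial SPDK$i is visible any more
      waitforserial_disconnect "SPDK$i"
      # then remove the subsystem on the target over the RPC socket
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  done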
01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:03.137 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 01:13:03.138 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 01:13:03.138 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 
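waitforserial_disconnect itself (the autotest_common.sh@1219-@1231 lines in the trace) is a poll-until-gone helper around lsblk. A hedged sketch, with the retry limit and sleep interval assumed rather than read from the trace:

  waitforserial_disconnect() {
      local serial=$1 i=0
      # keep polling while any block device still reports this NVMe serial
      while lsblk -o NAME,SERIAL | grep -q -w "$serial" ||
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          (( ++i > 15 )) && return 1   # retry limit assumed
          sleep 1                      # poll interval assumed
      done
      return 0
  }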
01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:13:03.138 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 01:13:03.396 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 01:13:03.396 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 01:13:03.396 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:13:03.397 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:13:03.397 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 01:13:03.397 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:13:03.397 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 01:13:03.397 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:13:03.397 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 01:13:03.397 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:03.397 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 
01:13:03.655 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 01:13:03.655 11:10:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:13:03.656 11:10:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 01:13:03.656 11:10:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 01:13:03.656 11:10:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:13:03.656 rmmod nvme_tcp 01:13:03.656 rmmod nvme_fabrics 01:13:03.656 rmmod nvme_keyring 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 86951 ']' 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 86951 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 86951 ']' 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 86951 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86951 01:13:03.950 killing process with pid 86951 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86951' 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 86951 01:13:03.950 11:10:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 86951 01:13:04.526 11:10:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:13:04.526 11:10:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:13:04.526 11:10:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:13:04.526 11:10:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:13:04.526 11:10:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 01:13:04.526 11:10:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:13:04.526 11:10:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:13:04.526 11:10:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:13:04.526 11:10:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:13:04.526 01:13:04.526 real 0m50.203s 01:13:04.526 user 2m46.178s 01:13:04.526 sys 0m36.514s 01:13:04.526 11:10:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 01:13:04.526 11:10:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:13:04.526 ************************************ 01:13:04.526 END TEST nvmf_multiconnection 01:13:04.526 ************************************ 01:13:04.784 11:10:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:13:04.784 11:10:09 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 01:13:04.784 11:10:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:13:04.784 11:10:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:13:04.784 11:10:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:13:04.784 ************************************ 01:13:04.784 START TEST nvmf_initiator_timeout 01:13:04.784 ************************************ 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 01:13:04.784 * Looking for test storage... 
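The nvmftestfini/nvmfcleanup sequence traced above reduces to three steps: unload the kernel initiator modules, stop the nvmf_tgt process, and flush the test interface. Simplified (the 20-attempt retry mirrors the 'for i in {1..20}' in the trace; the kill/wait handling is an approximation):

  set +e
  for i in {1..20}; do
      # removing nvme-tcp also pulls out nvme_fabrics/nvme_keyring, as the rmmod lines show
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      sleep 1   # brief back-off between attempts (assumed)
  done
  set -e
  kill "$nvmfpid" && wait "$nvmfpid"   # stop the target started for this test
  ip -4 addr flush nvmf_init_if        # drop the initiator-side test address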
01:13:04.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:13:04.784 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 01:13:04.785 11:10:09 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:13:04.785 Cannot find device "nvmf_tgt_br" 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 01:13:04.785 11:10:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:13:05.043 Cannot find device "nvmf_tgt_br2" 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:13:05.043 Cannot find device "nvmf_tgt_br" 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:13:05.043 Cannot find device "nvmf_tgt_br2" 01:13:05.043 11:10:10 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:13:05.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:13:05.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:13:05.043 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
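The nvmf_veth_init trace above is the entire virtual test topology: one veth pair for the initiator, two for the target, the target-side ends moved into a network namespace, and a bridge tying the host-side peers together. Stripped of the xtrace prefixes, the setup is approximately:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

The iptables ACCEPT rules and the three ping checks that follow in the trace only verify that this topology forwards traffic before the target is started.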
01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:13:05.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:13:05.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 01:13:05.301 01:13:05.301 --- 10.0.0.2 ping statistics --- 01:13:05.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:05.301 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:13:05.301 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:13:05.301 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 01:13:05.301 01:13:05.301 --- 10.0.0.3 ping statistics --- 01:13:05.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:05.301 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:13:05.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:13:05.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 01:13:05.301 01:13:05.301 --- 10.0.0.1 ping statistics --- 01:13:05.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:05.301 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=88027 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 88027 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 88027 ']' 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 01:13:05.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 01:13:05.301 11:10:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:05.558 [2024-07-22 11:10:10.530081] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:13:05.558 [2024-07-22 11:10:10.530191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:13:05.558 [2024-07-22 11:10:10.682263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:13:05.558 [2024-07-22 11:10:10.753106] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:13:05.558 [2024-07-22 11:10:10.753169] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:13:05.558 [2024-07-22 11:10:10.753179] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:13:05.558 [2024-07-22 11:10:10.753188] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:13:05.558 [2024-07-22 11:10:10.753195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
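nvmfappstart, traced above, amounts to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A hedged sketch of that pattern (the binary path and flags come from the trace; the polling loop is an assumption, the real waitforlisten helper may check readiness differently):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the target answers on its default RPC socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1   # bail out if the target died during start-up
      sleep 0.5
  done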
01:13:05.558 [2024-07-22 11:10:10.753424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:13:05.558 [2024-07-22 11:10:10.753731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:13:05.558 [2024-07-22 11:10:10.754421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:13:05.558 [2024-07-22 11:10:10.754421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:13:05.816 [2024-07-22 11:10:10.827741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:06.379 Malloc0 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:06.379 Delay0 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:06.379 [2024-07-22 11:10:11.504506] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:06.379 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:06.380 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:06.380 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:13:06.380 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:06.380 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:06.380 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:06.380 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:13:06.380 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:06.380 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:06.380 [2024-07-22 11:10:11.544630] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:13:06.380 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:06.380 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:13:06.637 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 01:13:06.637 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 01:13:06.637 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:13:06.637 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:13:06.637 11:10:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 01:13:08.531 11:10:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:13:08.531 11:10:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:13:08.531 11:10:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:13:08.531 11:10:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:13:08.531 11:10:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:13:08.531 11:10:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 01:13:08.531 11:10:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=88086 01:13:08.531 11:10:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 01:13:08.531 11:10:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 01:13:08.787 [global] 01:13:08.787 thread=1 01:13:08.787 invalidate=1 01:13:08.787 rw=write 01:13:08.787 time_based=1 01:13:08.787 runtime=60 01:13:08.787 ioengine=libaio 01:13:08.787 direct=1 01:13:08.787 bs=4096 01:13:08.787 iodepth=1 01:13:08.787 norandommap=0 01:13:08.787 numjobs=1 01:13:08.787 01:13:08.787 verify_dump=1 01:13:08.787 verify_backlog=512 01:13:08.787 verify_state_save=0 01:13:08.787 do_verify=1 01:13:08.787 verify=crc32c-intel 01:13:08.787 [job0] 01:13:08.787 filename=/dev/nvme0n1 01:13:08.787 Could not set queue depth (nvme0n1) 01:13:08.787 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:13:08.787 fio-3.35 01:13:08.787 Starting 1 thread 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:12.059 true 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:12.059 true 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:12.059 true 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:12.059 true 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:12.059 11:10:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 01:13:14.588 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 01:13:14.588 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:14.588 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:14.845 true 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:14.845 true 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:14.845 true 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:14.845 true 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 01:13:14.845 11:10:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 88086 01:14:11.058 01:14:11.058 job0: (groupid=0, jobs=1): err= 0: pid=88118: Mon Jul 22 11:11:14 2024 01:14:11.058 read: IOPS=888, BW=3552KiB/s (3637kB/s)(208MiB/60000msec) 01:14:11.058 slat (usec): min=6, max=10776, avg=10.95, stdev=62.76 01:14:11.058 clat (usec): min=105, max=40644k, avg=953.33, stdev=176081.15 01:14:11.058 lat (usec): min=135, max=40644k, avg=964.28, stdev=176081.15 01:14:11.058 clat percentiles (usec): 01:14:11.058 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 165], 01:14:11.058 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 194], 01:14:11.058 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 227], 95.00th=[ 239], 01:14:11.058 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 437], 99.95th=[ 545], 01:14:11.058 | 99.99th=[ 971] 01:14:11.058 write: IOPS=896, BW=3584KiB/s (3670kB/s)(210MiB/60000msec); 0 zone resets 01:14:11.058 slat (usec): min=8, max=684, avg=15.75, stdev= 5.09 01:14:11.058 clat (usec): min=96, max=1675, avg=142.68, stdev=26.91 01:14:11.058 lat (usec): min=108, max=1690, avg=158.43, stdev=28.00 01:14:11.058 clat percentiles (usec): 01:14:11.058 | 1.00th=[ 104], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 123], 01:14:11.058 | 30.00th=[ 129], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 147], 01:14:11.058 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 182], 01:14:11.058 | 99.00th=[ 204], 99.50th=[ 215], 99.90th=[ 289], 99.95th=[ 367], 01:14:11.058 | 99.99th=[ 857] 01:14:11.058 bw ( KiB/s): min= 4096, max=14544, per=100.00%, avg=10817.08, stdev=2096.32, samples=39 01:14:11.058 iops : min= 1024, max= 3636, avg=2704.26, stdev=524.08, samples=39 01:14:11.058 lat (usec) : 100=0.07%, 250=98.66%, 500=1.23%, 750=0.03%, 1000=0.01% 01:14:11.058 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 01:14:11.058 cpu : usr=0.40%, sys=1.79%, ctx=107046, majf=0, minf=2 01:14:11.058 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:11.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:11.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:11.058 issued rwts: total=53281,53760,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:11.058 latency : target=0, window=0, percentile=100.00%, depth=1 01:14:11.058 01:14:11.058 Run status group 0 (all jobs): 01:14:11.058 READ: bw=3552KiB/s (3637kB/s), 3552KiB/s-3552KiB/s (3637kB/s-3637kB/s), io=208MiB (218MB), run=60000-60000msec 01:14:11.058 WRITE: bw=3584KiB/s (3670kB/s), 3584KiB/s-3584KiB/s (3670kB/s-3670kB/s), io=210MiB (220MB), run=60000-60000msec 01:14:11.058 01:14:11.058 Disk stats (read/write): 01:14:11.058 nvme0n1: ios=53512/53248, merge=0/0, ticks=10414/7940, in_queue=18354, util=99.60% 01:14:11.058 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:14:11.058 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:14:11.059 nvmf hotplug test: fio successful as expected 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:14:11.059 rmmod nvme_tcp 01:14:11.059 rmmod nvme_fabrics 01:14:11.059 rmmod nvme_keyring 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 88027 ']' 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 88027 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 88027 ']' 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 88027 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88027 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:14:11.059 killing process with pid 88027 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88027' 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 88027 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 88027 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:14:11.059 01:14:11.059 real 1m4.745s 01:14:11.059 user 3m53.503s 01:14:11.059 sys 0m21.640s 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 01:14:11.059 11:11:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:14:11.059 ************************************ 01:14:11.059 END TEST nvmf_initiator_timeout 01:14:11.059 ************************************ 01:14:11.059 11:11:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:14:11.059 11:11:14 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 01:14:11.059 11:11:14 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 01:14:11.059 11:11:14 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:14:11.059 11:11:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:14:11.059 11:11:14 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 01:14:11.059 11:11:14 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 01:14:11.059 11:11:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:14:11.059 11:11:14 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 01:14:11.059 11:11:14 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 01:14:11.059 11:11:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:14:11.059 11:11:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:14:11.059 11:11:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:14:11.059 ************************************ 01:14:11.059 START TEST nvmf_identify 01:14:11.059 ************************************ 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 01:14:11.059 * Looking for test storage... 01:14:11.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:14:11.059 11:11:14 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:14:11.060 Cannot find device "nvmf_tgt_br" 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:14:11.060 Cannot find device "nvmf_tgt_br2" 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:14:11.060 Cannot find device "nvmf_tgt_br" 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:14:11.060 Cannot find device "nvmf_tgt_br2" 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:14:11.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:14:11.060 11:11:14 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 01:14:11.060 11:11:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:14:11.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:14:11.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:14:11.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 01:14:11.060 01:14:11.060 --- 10.0.0.2 ping statistics --- 01:14:11.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:11.060 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:14:11.060 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:14:11.060 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 01:14:11.060 01:14:11.060 --- 10.0.0.3 ping statistics --- 01:14:11.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:11.060 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:14:11.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:14:11.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 01:14:11.060 01:14:11.060 --- 10.0.0.1 ping statistics --- 01:14:11.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:11.060 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=88949 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 88949 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 88949 ']' 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 01:14:11.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
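To make the nvmf_veth_init plumbing above easier to follow, here is a condensed sketch of the same steps in the order this run performs them (the second target interface, nvmf_tgt_if2 on 10.0.0.3, is created the same way and is omitted for brevity):
# Network namespace for the target, a veth pair per side, and a bridge tying them together.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# Addresses: 10.0.0.1 stays on the host (initiator), 10.0.0.2 moves into the namespace (target).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side peers and open TCP/4420 toward the initiator interface.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                    # host -> target address
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host
modprobe nvme-tcp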
01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:14:11.060 11:11:15 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:14:11.060 [2024-07-22 11:11:15.347819] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:14:11.060 [2024-07-22 11:11:15.347939] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:14:11.060 [2024-07-22 11:11:15.494946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:14:11.060 [2024-07-22 11:11:15.557734] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:14:11.060 [2024-07-22 11:11:15.557791] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:14:11.060 [2024-07-22 11:11:15.557801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:14:11.060 [2024-07-22 11:11:15.557809] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:14:11.060 [2024-07-22 11:11:15.557816] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:14:11.060 [2024-07-22 11:11:15.558694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:14:11.060 [2024-07-22 11:11:15.558802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:14:11.060 [2024-07-22 11:11:15.558917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:14:11.060 [2024-07-22 11:11:15.558919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:14:11.060 [2024-07-22 11:11:15.601191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:14:11.060 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:14:11.060 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 01:14:11.060 11:11:16 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:14:11.060 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:11.060 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:14:11.060 [2024-07-22 11:11:16.183450] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:14:11.060 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:11.060 11:11:16 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 01:14:11.060 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 01:14:11.060 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:14:11.319 Malloc0 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:11.319 11:11:16 
nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:14:11.319 [2024-07-22 11:11:16.318126] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:14:11.319 [ 01:14:11.319 { 01:14:11.319 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:14:11.319 "subtype": "Discovery", 01:14:11.319 "listen_addresses": [ 01:14:11.319 { 01:14:11.319 "trtype": "TCP", 01:14:11.319 "adrfam": "IPv4", 01:14:11.319 "traddr": "10.0.0.2", 01:14:11.319 "trsvcid": "4420" 01:14:11.319 } 01:14:11.319 ], 01:14:11.319 "allow_any_host": true, 01:14:11.319 "hosts": [] 01:14:11.319 }, 01:14:11.319 { 01:14:11.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:14:11.319 "subtype": "NVMe", 01:14:11.319 "listen_addresses": [ 01:14:11.319 { 01:14:11.319 "trtype": "TCP", 01:14:11.319 "adrfam": "IPv4", 01:14:11.319 "traddr": "10.0.0.2", 01:14:11.319 "trsvcid": "4420" 01:14:11.319 } 01:14:11.319 ], 01:14:11.319 "allow_any_host": true, 01:14:11.319 "hosts": [], 01:14:11.319 "serial_number": "SPDK00000000000001", 01:14:11.319 "model_number": "SPDK bdev Controller", 01:14:11.319 "max_namespaces": 32, 01:14:11.319 "min_cntlid": 1, 01:14:11.319 "max_cntlid": 65519, 01:14:11.319 "namespaces": [ 01:14:11.319 { 01:14:11.319 "nsid": 1, 01:14:11.319 "bdev_name": "Malloc0", 01:14:11.319 "name": "Malloc0", 01:14:11.319 "nguid": "ABCDEF0123456789ABCDEF0123456789", 01:14:11.319 "eui64": "ABCDEF0123456789", 01:14:11.319 "uuid": "dd9cc267-fa6c-46f6-9fa4-cfd5a1226bfd" 01:14:11.319 } 01:14:11.319 ] 01:14:11.319 } 01:14:11.319 ] 
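The rpc_cmd calls above are thin wrappers around scripts/rpc.py; as a sketch, the same provisioning sequence, followed by the spdk_nvme_identify run that appears next in the log, would look roughly like this, assuming the target from the previous step is still serving the default /var/tmp/spdk.sock:
# Sketch of the identify-stage target setup traced above.
rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_get_subsystems          # returns the JSON shown above
# Query the discovery subsystem from the host side:
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all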
01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:11.319 11:11:16 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 01:14:11.319 [2024-07-22 11:11:16.391264] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:14:11.319 [2024-07-22 11:11:16.391313] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88984 ] 01:14:11.319 [2024-07-22 11:11:16.526555] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 01:14:11.319 [2024-07-22 11:11:16.526625] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 01:14:11.319 [2024-07-22 11:11:16.526631] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 01:14:11.319 [2024-07-22 11:11:16.526647] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 01:14:11.320 [2024-07-22 11:11:16.526654] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 01:14:11.320 [2024-07-22 11:11:16.526788] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 01:14:11.320 [2024-07-22 11:11:16.526824] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x204c830 0 01:14:11.584 [2024-07-22 11:11:16.541873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 01:14:11.584 [2024-07-22 11:11:16.541900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 01:14:11.584 [2024-07-22 11:11:16.541906] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 01:14:11.584 [2024-07-22 11:11:16.541910] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 01:14:11.584 [2024-07-22 11:11:16.541957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.584 [2024-07-22 11:11:16.541963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.584 [2024-07-22 11:11:16.541968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204c830) 01:14:11.584 [2024-07-22 11:11:16.541983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:14:11.584 [2024-07-22 11:11:16.542019] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2097fc0, cid 0, qid 0 01:14:11.584 [2024-07-22 11:11:16.549869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.584 [2024-07-22 11:11:16.549887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.584 [2024-07-22 11:11:16.549892] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.584 [2024-07-22 11:11:16.549897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2097fc0) on tqpair=0x204c830 01:14:11.584 [2024-07-22 11:11:16.549907] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 01:14:11.584 [2024-07-22 11:11:16.549915] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 01:14:11.584 [2024-07-22 11:11:16.549921] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 01:14:11.584 [2024-07-22 11:11:16.549939] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.584 [2024-07-22 11:11:16.549944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.584 [2024-07-22 11:11:16.549947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204c830) 01:14:11.584 [2024-07-22 11:11:16.549957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.584 [2024-07-22 11:11:16.549983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2097fc0, cid 0, qid 0 01:14:11.584 [2024-07-22 11:11:16.550036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.584 [2024-07-22 11:11:16.550042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.584 [2024-07-22 11:11:16.550046] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.584 [2024-07-22 11:11:16.550050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2097fc0) on tqpair=0x204c830 01:14:11.584 [2024-07-22 11:11:16.550055] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 01:14:11.584 [2024-07-22 11:11:16.550062] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 01:14:11.584 [2024-07-22 11:11:16.550069] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.584 [2024-07-22 11:11:16.550073] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.584 [2024-07-22 11:11:16.550076] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204c830) 01:14:11.584 [2024-07-22 11:11:16.550082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.584 [2024-07-22 11:11:16.550098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2097fc0, cid 0, qid 0 01:14:11.585 [2024-07-22 11:11:16.550141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.585 [2024-07-22 11:11:16.550147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.585 [2024-07-22 11:11:16.550150] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2097fc0) on tqpair=0x204c830 01:14:11.585 [2024-07-22 11:11:16.550160] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 01:14:11.585 [2024-07-22 11:11:16.550167] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 01:14:11.585 [2024-07-22 11:11:16.550174] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204c830) 
01:14:11.585 [2024-07-22 11:11:16.550187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.585 [2024-07-22 11:11:16.550202] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2097fc0, cid 0, qid 0 01:14:11.585 [2024-07-22 11:11:16.550242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.585 [2024-07-22 11:11:16.550248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.585 [2024-07-22 11:11:16.550251] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2097fc0) on tqpair=0x204c830 01:14:11.585 [2024-07-22 11:11:16.550260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 01:14:11.585 [2024-07-22 11:11:16.550269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550273] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204c830) 01:14:11.585 [2024-07-22 11:11:16.550282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.585 [2024-07-22 11:11:16.550296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2097fc0, cid 0, qid 0 01:14:11.585 [2024-07-22 11:11:16.550331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.585 [2024-07-22 11:11:16.550337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.585 [2024-07-22 11:11:16.550340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2097fc0) on tqpair=0x204c830 01:14:11.585 [2024-07-22 11:11:16.550349] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 01:14:11.585 [2024-07-22 11:11:16.550354] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 01:14:11.585 [2024-07-22 11:11:16.550361] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 01:14:11.585 [2024-07-22 11:11:16.550466] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 01:14:11.585 [2024-07-22 11:11:16.550471] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 01:14:11.585 [2024-07-22 11:11:16.550480] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204c830) 01:14:11.585 [2024-07-22 11:11:16.550494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.585 
[2024-07-22 11:11:16.550510] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2097fc0, cid 0, qid 0 01:14:11.585 [2024-07-22 11:11:16.550554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.585 [2024-07-22 11:11:16.550559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.585 [2024-07-22 11:11:16.550563] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2097fc0) on tqpair=0x204c830 01:14:11.585 [2024-07-22 11:11:16.550572] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 01:14:11.585 [2024-07-22 11:11:16.550580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550584] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550588] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204c830) 01:14:11.585 [2024-07-22 11:11:16.550593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.585 [2024-07-22 11:11:16.550608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2097fc0, cid 0, qid 0 01:14:11.585 [2024-07-22 11:11:16.550644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.585 [2024-07-22 11:11:16.550650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.585 [2024-07-22 11:11:16.550653] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2097fc0) on tqpair=0x204c830 01:14:11.585 [2024-07-22 11:11:16.550661] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 01:14:11.585 [2024-07-22 11:11:16.550666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 01:14:11.585 [2024-07-22 11:11:16.550673] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 01:14:11.585 [2024-07-22 11:11:16.550682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 01:14:11.585 [2024-07-22 11:11:16.550691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204c830) 01:14:11.585 [2024-07-22 11:11:16.550701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.585 [2024-07-22 11:11:16.550715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2097fc0, cid 0, qid 0 01:14:11.585 [2024-07-22 11:11:16.550799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:14:11.585 [2024-07-22 11:11:16.550805] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:14:11.585 [2024-07-22 11:11:16.550809] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
01:14:11.585 [2024-07-22 11:11:16.550813] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x204c830): datao=0, datal=4096, cccid=0 01:14:11.585 [2024-07-22 11:11:16.550818] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2097fc0) on tqpair(0x204c830): expected_datao=0, payload_size=4096 01:14:11.585 [2024-07-22 11:11:16.550823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550831] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550835] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.585 [2024-07-22 11:11:16.550861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.585 [2024-07-22 11:11:16.550865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550869] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2097fc0) on tqpair=0x204c830 01:14:11.585 [2024-07-22 11:11:16.550877] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 01:14:11.585 [2024-07-22 11:11:16.550882] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 01:14:11.585 [2024-07-22 11:11:16.550887] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 01:14:11.585 [2024-07-22 11:11:16.550892] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 01:14:11.585 [2024-07-22 11:11:16.550897] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 01:14:11.585 [2024-07-22 11:11:16.550902] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 01:14:11.585 [2024-07-22 11:11:16.550910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 01:14:11.585 [2024-07-22 11:11:16.550917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550921] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.550925] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204c830) 01:14:11.585 [2024-07-22 11:11:16.550931] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 01:14:11.585 [2024-07-22 11:11:16.550946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2097fc0, cid 0, qid 0 01:14:11.585 [2024-07-22 11:11:16.551008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.585 [2024-07-22 11:11:16.551014] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.585 [2024-07-22 11:11:16.551017] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.551021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2097fc0) on tqpair=0x204c830 01:14:11.585 [2024-07-22 11:11:16.551032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.585 [2024-07-22 
11:11:16.551036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.551039] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204c830) 01:14:11.585 [2024-07-22 11:11:16.551045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:14:11.585 [2024-07-22 11:11:16.551051] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.551055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.551059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x204c830) 01:14:11.585 [2024-07-22 11:11:16.551064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:14:11.585 [2024-07-22 11:11:16.551070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.551074] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.551078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x204c830) 01:14:11.585 [2024-07-22 11:11:16.551083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:14:11.585 [2024-07-22 11:11:16.551089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.551093] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.551096] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.585 [2024-07-22 11:11:16.551102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:14:11.585 [2024-07-22 11:11:16.551107] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 01:14:11.585 [2024-07-22 11:11:16.551114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 01:14:11.585 [2024-07-22 11:11:16.551120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.585 [2024-07-22 11:11:16.551124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x204c830) 01:14:11.586 [2024-07-22 11:11:16.551130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.586 [2024-07-22 11:11:16.551147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2097fc0, cid 0, qid 0 01:14:11.586 [2024-07-22 11:11:16.551152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098140, cid 1, qid 0 01:14:11.586 [2024-07-22 11:11:16.551157] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20982c0, cid 2, qid 0 01:14:11.586 [2024-07-22 11:11:16.551161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.586 [2024-07-22 11:11:16.551166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20985c0, cid 4, qid 0 01:14:11.586 [2024-07-22 11:11:16.551233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 01:14:11.586 [2024-07-22 11:11:16.551239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.586 [2024-07-22 11:11:16.551242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20985c0) on tqpair=0x204c830 01:14:11.586 [2024-07-22 11:11:16.551254] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 01:14:11.586 [2024-07-22 11:11:16.551259] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 01:14:11.586 [2024-07-22 11:11:16.551268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x204c830) 01:14:11.586 [2024-07-22 11:11:16.551278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.586 [2024-07-22 11:11:16.551292] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20985c0, cid 4, qid 0 01:14:11.586 [2024-07-22 11:11:16.551340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:14:11.586 [2024-07-22 11:11:16.551346] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:14:11.586 [2024-07-22 11:11:16.551349] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551353] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x204c830): datao=0, datal=4096, cccid=4 01:14:11.586 [2024-07-22 11:11:16.551358] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20985c0) on tqpair(0x204c830): expected_datao=0, payload_size=4096 01:14:11.586 [2024-07-22 11:11:16.551362] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551368] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551372] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.586 [2024-07-22 11:11:16.551386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.586 [2024-07-22 11:11:16.551389] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551393] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20985c0) on tqpair=0x204c830 01:14:11.586 [2024-07-22 11:11:16.551406] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 01:14:11.586 [2024-07-22 11:11:16.551430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x204c830) 01:14:11.586 [2024-07-22 11:11:16.551440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.586 [2024-07-22 11:11:16.551447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
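In the run above the driver negotiates the keep-alive timer (GET FEATURES KEEP ALIVE TIMER) and then reports sending a keep-alive every 5000000 us, consistent with the default 10000 ms timeout being serviced at roughly half the interval. A hedged sketch of overriding that timeout from application code follows; the helper name and any concrete value passed in are assumptions for illustration only.

/* Sketch: connect with a non-default keep-alive timeout. Assumes the SPDK
 * environment was already initialized (see the earlier sketch). */
#include <stdint.h>

#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *
connect_with_keep_alive(const char *trid_str, uint32_t keep_alive_ms)
{
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr_opts opts;

	if (spdk_nvme_transport_id_parse(&trid, trid_str) != 0) {
		return NULL;
	}

	/* Start from the driver defaults, then adjust only the keep-alive. */
	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	opts.keep_alive_timeout_ms = keep_alive_ms;

	/* Blocks until the controller reaches the ready state, as above. */
	return spdk_nvme_connect(&trid, &opts, sizeof(opts));
}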
01:14:11.586 [2024-07-22 11:11:16.551454] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x204c830) 01:14:11.586 [2024-07-22 11:11:16.551460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 01:14:11.586 [2024-07-22 11:11:16.551478] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20985c0, cid 4, qid 0 01:14:11.586 [2024-07-22 11:11:16.551483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098740, cid 5, qid 0 01:14:11.586 [2024-07-22 11:11:16.551572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:14:11.586 [2024-07-22 11:11:16.551578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:14:11.586 [2024-07-22 11:11:16.551581] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551585] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x204c830): datao=0, datal=1024, cccid=4 01:14:11.586 [2024-07-22 11:11:16.551590] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20985c0) on tqpair(0x204c830): expected_datao=0, payload_size=1024 01:14:11.586 [2024-07-22 11:11:16.551595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551601] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551604] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.586 [2024-07-22 11:11:16.551615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.586 [2024-07-22 11:11:16.551618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098740) on tqpair=0x204c830 01:14:11.586 [2024-07-22 11:11:16.551639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.586 [2024-07-22 11:11:16.551645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.586 [2024-07-22 11:11:16.551649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20985c0) on tqpair=0x204c830 01:14:11.586 [2024-07-22 11:11:16.551669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551674] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x204c830) 01:14:11.586 [2024-07-22 11:11:16.551679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.586 [2024-07-22 11:11:16.551698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20985c0, cid 4, qid 0 01:14:11.586 [2024-07-22 11:11:16.551752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:14:11.586 [2024-07-22 11:11:16.551758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:14:11.586 [2024-07-22 11:11:16.551761] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551765] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x204c830): datao=0, datal=3072, cccid=4 01:14:11.586 [2024-07-22 
11:11:16.551770] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20985c0) on tqpair(0x204c830): expected_datao=0, payload_size=3072 01:14:11.586 [2024-07-22 11:11:16.551775] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551781] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551785] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.586 [2024-07-22 11:11:16.551799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.586 [2024-07-22 11:11:16.551802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20985c0) on tqpair=0x204c830 01:14:11.586 [2024-07-22 11:11:16.551814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x204c830) 01:14:11.586 [2024-07-22 11:11:16.551824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.586 [2024-07-22 11:11:16.551842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20985c0, cid 4, qid 0 01:14:11.586 [2024-07-22 11:11:16.551905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:14:11.586 [2024-07-22 11:11:16.551911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:14:11.586 [2024-07-22 11:11:16.551915] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551918] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x204c830): datao=0, datal=8, cccid=4 01:14:11.586 [2024-07-22 11:11:16.551923] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20985c0) on tqpair(0x204c830): expected_datao=0, payload_size=8 01:14:11.586 [2024-07-22 11:11:16.551928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551933] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551937] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551953] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.586 [2024-07-22 11:11:16.551959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.586 [2024-07-22 11:11:16.551962] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.586 [2024-07-22 11:11:16.551966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20985c0) on tqpair=0x204c830 01:14:11.586 ===================================================== 01:14:11.586 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 01:14:11.586 ===================================================== 01:14:11.586 Controller Capabilities/Features 01:14:11.586 ================================ 01:14:11.586 Vendor ID: 0000 01:14:11.586 Subsystem Vendor ID: 0000 01:14:11.586 Serial Number: .................... 01:14:11.586 Model Number: ........................................ 
01:14:11.586 Firmware Version: 24.09 01:14:11.586 Recommended Arb Burst: 0 01:14:11.586 IEEE OUI Identifier: 00 00 00 01:14:11.586 Multi-path I/O 01:14:11.586 May have multiple subsystem ports: No 01:14:11.586 May have multiple controllers: No 01:14:11.586 Associated with SR-IOV VF: No 01:14:11.586 Max Data Transfer Size: 131072 01:14:11.586 Max Number of Namespaces: 0 01:14:11.586 Max Number of I/O Queues: 1024 01:14:11.586 NVMe Specification Version (VS): 1.3 01:14:11.586 NVMe Specification Version (Identify): 1.3 01:14:11.586 Maximum Queue Entries: 128 01:14:11.586 Contiguous Queues Required: Yes 01:14:11.586 Arbitration Mechanisms Supported 01:14:11.586 Weighted Round Robin: Not Supported 01:14:11.586 Vendor Specific: Not Supported 01:14:11.586 Reset Timeout: 15000 ms 01:14:11.586 Doorbell Stride: 4 bytes 01:14:11.586 NVM Subsystem Reset: Not Supported 01:14:11.586 Command Sets Supported 01:14:11.586 NVM Command Set: Supported 01:14:11.586 Boot Partition: Not Supported 01:14:11.586 Memory Page Size Minimum: 4096 bytes 01:14:11.586 Memory Page Size Maximum: 4096 bytes 01:14:11.586 Persistent Memory Region: Not Supported 01:14:11.586 Optional Asynchronous Events Supported 01:14:11.586 Namespace Attribute Notices: Not Supported 01:14:11.586 Firmware Activation Notices: Not Supported 01:14:11.586 ANA Change Notices: Not Supported 01:14:11.586 PLE Aggregate Log Change Notices: Not Supported 01:14:11.586 LBA Status Info Alert Notices: Not Supported 01:14:11.586 EGE Aggregate Log Change Notices: Not Supported 01:14:11.586 Normal NVM Subsystem Shutdown event: Not Supported 01:14:11.586 Zone Descriptor Change Notices: Not Supported 01:14:11.586 Discovery Log Change Notices: Supported 01:14:11.586 Controller Attributes 01:14:11.587 128-bit Host Identifier: Not Supported 01:14:11.587 Non-Operational Permissive Mode: Not Supported 01:14:11.587 NVM Sets: Not Supported 01:14:11.587 Read Recovery Levels: Not Supported 01:14:11.587 Endurance Groups: Not Supported 01:14:11.587 Predictable Latency Mode: Not Supported 01:14:11.587 Traffic Based Keep ALive: Not Supported 01:14:11.587 Namespace Granularity: Not Supported 01:14:11.587 SQ Associations: Not Supported 01:14:11.587 UUID List: Not Supported 01:14:11.587 Multi-Domain Subsystem: Not Supported 01:14:11.587 Fixed Capacity Management: Not Supported 01:14:11.587 Variable Capacity Management: Not Supported 01:14:11.587 Delete Endurance Group: Not Supported 01:14:11.587 Delete NVM Set: Not Supported 01:14:11.587 Extended LBA Formats Supported: Not Supported 01:14:11.587 Flexible Data Placement Supported: Not Supported 01:14:11.587 01:14:11.587 Controller Memory Buffer Support 01:14:11.587 ================================ 01:14:11.587 Supported: No 01:14:11.587 01:14:11.587 Persistent Memory Region Support 01:14:11.587 ================================ 01:14:11.587 Supported: No 01:14:11.587 01:14:11.587 Admin Command Set Attributes 01:14:11.587 ============================ 01:14:11.587 Security Send/Receive: Not Supported 01:14:11.587 Format NVM: Not Supported 01:14:11.587 Firmware Activate/Download: Not Supported 01:14:11.587 Namespace Management: Not Supported 01:14:11.587 Device Self-Test: Not Supported 01:14:11.587 Directives: Not Supported 01:14:11.587 NVMe-MI: Not Supported 01:14:11.587 Virtualization Management: Not Supported 01:14:11.587 Doorbell Buffer Config: Not Supported 01:14:11.587 Get LBA Status Capability: Not Supported 01:14:11.587 Command & Feature Lockdown Capability: Not Supported 01:14:11.587 Abort Command Limit: 1 01:14:11.587 Async 
Event Request Limit: 4 01:14:11.587 Number of Firmware Slots: N/A 01:14:11.587 Firmware Slot 1 Read-Only: N/A 01:14:11.587 Firmware Activation Without Reset: N/A 01:14:11.587 Multiple Update Detection Support: N/A 01:14:11.587 Firmware Update Granularity: No Information Provided 01:14:11.587 Per-Namespace SMART Log: No 01:14:11.587 Asymmetric Namespace Access Log Page: Not Supported 01:14:11.587 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 01:14:11.587 Command Effects Log Page: Not Supported 01:14:11.587 Get Log Page Extended Data: Supported 01:14:11.587 Telemetry Log Pages: Not Supported 01:14:11.587 Persistent Event Log Pages: Not Supported 01:14:11.587 Supported Log Pages Log Page: May Support 01:14:11.587 Commands Supported & Effects Log Page: Not Supported 01:14:11.587 Feature Identifiers & Effects Log Page:May Support 01:14:11.587 NVMe-MI Commands & Effects Log Page: May Support 01:14:11.587 Data Area 4 for Telemetry Log: Not Supported 01:14:11.587 Error Log Page Entries Supported: 128 01:14:11.587 Keep Alive: Not Supported 01:14:11.587 01:14:11.587 NVM Command Set Attributes 01:14:11.587 ========================== 01:14:11.587 Submission Queue Entry Size 01:14:11.587 Max: 1 01:14:11.587 Min: 1 01:14:11.587 Completion Queue Entry Size 01:14:11.587 Max: 1 01:14:11.587 Min: 1 01:14:11.587 Number of Namespaces: 0 01:14:11.587 Compare Command: Not Supported 01:14:11.587 Write Uncorrectable Command: Not Supported 01:14:11.587 Dataset Management Command: Not Supported 01:14:11.587 Write Zeroes Command: Not Supported 01:14:11.587 Set Features Save Field: Not Supported 01:14:11.587 Reservations: Not Supported 01:14:11.587 Timestamp: Not Supported 01:14:11.587 Copy: Not Supported 01:14:11.587 Volatile Write Cache: Not Present 01:14:11.587 Atomic Write Unit (Normal): 1 01:14:11.587 Atomic Write Unit (PFail): 1 01:14:11.587 Atomic Compare & Write Unit: 1 01:14:11.587 Fused Compare & Write: Supported 01:14:11.587 Scatter-Gather List 01:14:11.587 SGL Command Set: Supported 01:14:11.587 SGL Keyed: Supported 01:14:11.587 SGL Bit Bucket Descriptor: Not Supported 01:14:11.587 SGL Metadata Pointer: Not Supported 01:14:11.587 Oversized SGL: Not Supported 01:14:11.587 SGL Metadata Address: Not Supported 01:14:11.587 SGL Offset: Supported 01:14:11.587 Transport SGL Data Block: Not Supported 01:14:11.587 Replay Protected Memory Block: Not Supported 01:14:11.587 01:14:11.587 Firmware Slot Information 01:14:11.587 ========================= 01:14:11.587 Active slot: 0 01:14:11.587 01:14:11.587 01:14:11.587 Error Log 01:14:11.587 ========= 01:14:11.587 01:14:11.587 Active Namespaces 01:14:11.587 ================= 01:14:11.587 Discovery Log Page 01:14:11.587 ================== 01:14:11.587 Generation Counter: 2 01:14:11.587 Number of Records: 2 01:14:11.587 Record Format: 0 01:14:11.587 01:14:11.587 Discovery Log Entry 0 01:14:11.587 ---------------------- 01:14:11.587 Transport Type: 3 (TCP) 01:14:11.587 Address Family: 1 (IPv4) 01:14:11.587 Subsystem Type: 3 (Current Discovery Subsystem) 01:14:11.587 Entry Flags: 01:14:11.587 Duplicate Returned Information: 1 01:14:11.587 Explicit Persistent Connection Support for Discovery: 1 01:14:11.587 Transport Requirements: 01:14:11.587 Secure Channel: Not Required 01:14:11.587 Port ID: 0 (0x0000) 01:14:11.587 Controller ID: 65535 (0xffff) 01:14:11.587 Admin Max SQ Size: 128 01:14:11.587 Transport Service Identifier: 4420 01:14:11.587 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 01:14:11.587 Transport Address: 10.0.0.2 01:14:11.587 
Discovery Log Entry 1 01:14:11.587 ---------------------- 01:14:11.587 Transport Type: 3 (TCP) 01:14:11.587 Address Family: 1 (IPv4) 01:14:11.587 Subsystem Type: 2 (NVM Subsystem) 01:14:11.587 Entry Flags: 01:14:11.587 Duplicate Returned Information: 0 01:14:11.587 Explicit Persistent Connection Support for Discovery: 0 01:14:11.587 Transport Requirements: 01:14:11.587 Secure Channel: Not Required 01:14:11.587 Port ID: 0 (0x0000) 01:14:11.587 Controller ID: 65535 (0xffff) 01:14:11.587 Admin Max SQ Size: 128 01:14:11.587 Transport Service Identifier: 4420 01:14:11.587 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 01:14:11.587 Transport Address: 10.0.0.2 [2024-07-22 11:11:16.552056] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 01:14:11.587 [2024-07-22 11:11:16.552067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2097fc0) on tqpair=0x204c830 01:14:11.587 [2024-07-22 11:11:16.552074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:14:11.587 [2024-07-22 11:11:16.552079] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098140) on tqpair=0x204c830 01:14:11.587 [2024-07-22 11:11:16.552084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:14:11.587 [2024-07-22 11:11:16.552089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20982c0) on tqpair=0x204c830 01:14:11.587 [2024-07-22 11:11:16.552094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:14:11.587 [2024-07-22 11:11:16.552099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.587 [2024-07-22 11:11:16.552103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:14:11.587 [2024-07-22 11:11:16.552111] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.587 [2024-07-22 11:11:16.552115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.587 [2024-07-22 11:11:16.552118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.587 [2024-07-22 11:11:16.552125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.587 [2024-07-22 11:11:16.552143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.587 [2024-07-22 11:11:16.552180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.587 [2024-07-22 11:11:16.552185] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.587 [2024-07-22 11:11:16.552189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.587 [2024-07-22 11:11:16.552193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.587 [2024-07-22 11:11:16.552199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.587 [2024-07-22 11:11:16.552203] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.587 [2024-07-22 11:11:16.552207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.587 [2024-07-22 
11:11:16.552213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.587 [2024-07-22 11:11:16.552230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.587 [2024-07-22 11:11:16.552286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.587 [2024-07-22 11:11:16.552292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.587 [2024-07-22 11:11:16.552296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.587 [2024-07-22 11:11:16.552299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.587 [2024-07-22 11:11:16.552307] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 01:14:11.587 [2024-07-22 11:11:16.552312] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 01:14:11.588 [2024-07-22 11:11:16.552321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.588 [2024-07-22 11:11:16.552334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.588 [2024-07-22 11:11:16.552348] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.588 [2024-07-22 11:11:16.552386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.588 [2024-07-22 11:11:16.552392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.588 [2024-07-22 11:11:16.552395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.588 [2024-07-22 11:11:16.552408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.588 [2024-07-22 11:11:16.552422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.588 [2024-07-22 11:11:16.552436] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.588 [2024-07-22 11:11:16.552479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.588 [2024-07-22 11:11:16.552485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.588 [2024-07-22 11:11:16.552488] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.588 [2024-07-22 11:11:16.552500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552508] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.588 [2024-07-22 11:11:16.552514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.588 [2024-07-22 11:11:16.552528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.588 [2024-07-22 11:11:16.552575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.588 [2024-07-22 11:11:16.552581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.588 [2024-07-22 11:11:16.552585] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552588] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.588 [2024-07-22 11:11:16.552597] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552604] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.588 [2024-07-22 11:11:16.552610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.588 [2024-07-22 11:11:16.552624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.588 [2024-07-22 11:11:16.552664] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.588 [2024-07-22 11:11:16.552670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.588 [2024-07-22 11:11:16.552673] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.588 [2024-07-22 11:11:16.552685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.588 [2024-07-22 11:11:16.552699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.588 [2024-07-22 11:11:16.552713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.588 [2024-07-22 11:11:16.552757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.588 [2024-07-22 11:11:16.552763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.588 [2024-07-22 11:11:16.552766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552770] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.588 [2024-07-22 11:11:16.552779] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.588 [2024-07-22 11:11:16.552792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.588 [2024-07-22 11:11:16.552805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.588 [2024-07-22 11:11:16.552841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.588 [2024-07-22 11:11:16.552858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.588 [2024-07-22 11:11:16.552862] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.588 [2024-07-22 11:11:16.552874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.588 [2024-07-22 11:11:16.552888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.588 [2024-07-22 11:11:16.552903] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.588 [2024-07-22 11:11:16.552943] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.588 [2024-07-22 11:11:16.552949] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.588 [2024-07-22 11:11:16.552952] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552956] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.588 [2024-07-22 11:11:16.552965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.552972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.588 [2024-07-22 11:11:16.552978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.588 [2024-07-22 11:11:16.552993] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.588 [2024-07-22 11:11:16.553030] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.588 [2024-07-22 11:11:16.553036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.588 [2024-07-22 11:11:16.553040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553044] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.588 [2024-07-22 11:11:16.553052] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553056] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.588 [2024-07-22 11:11:16.553065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.588 [2024-07-22 11:11:16.553079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.588 
[2024-07-22 11:11:16.553128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.588 [2024-07-22 11:11:16.553134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.588 [2024-07-22 11:11:16.553138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.588 [2024-07-22 11:11:16.553150] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553154] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553158] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.588 [2024-07-22 11:11:16.553163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.588 [2024-07-22 11:11:16.553178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.588 [2024-07-22 11:11:16.553222] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.588 [2024-07-22 11:11:16.553227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.588 [2024-07-22 11:11:16.553231] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.588 [2024-07-22 11:11:16.553243] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553247] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.588 [2024-07-22 11:11:16.553257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.588 [2024-07-22 11:11:16.553271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.588 [2024-07-22 11:11:16.553310] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.588 [2024-07-22 11:11:16.553316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.588 [2024-07-22 11:11:16.553319] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.588 [2024-07-22 11:11:16.553331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553339] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.588 [2024-07-22 11:11:16.553345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.588 [2024-07-22 11:11:16.553359] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.588 [2024-07-22 11:11:16.553404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.588 [2024-07-22 11:11:16.553409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
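The GET LOG PAGE (02) commands earlier in this run, with log identifier 0x70 in CDW10, are the driver paging in the discovery log whose decoded form is printed above (Generation Counter 2, two records, one per subsystem). The sketch below fetches only the fixed-size header through the admin-queue API; it assumes a TCP controller where a plain caller-provided buffer is acceptable, and the helper and callback names are made up for the example.

/* Sketch: read the discovery log page header and report genctr/numrec.
 * A full reader would issue follow-up reads at increasing offsets for the
 * numrec entries, as the log above shows the driver doing. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static void
get_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE failed\n");
	}
	*(bool *)arg = true;
}

static int
read_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
		      struct spdk_nvmf_discovery_log_page *hdr)
{
	bool done = false;
	int rc;

	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					      SPDK_NVME_GLOBAL_NS_TAG,
					      hdr, sizeof(*hdr), 0,
					      get_log_done, &done);
	if (rc != 0) {
		return rc;
	}

	/* Busy-poll the admin queue until the completion callback fires. */
	while (!done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	printf("genctr=%ju numrec=%ju\n",
	       (uintmax_t)hdr->genctr, (uintmax_t)hdr->numrec);
	return 0;
}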
01:14:11.588 [2024-07-22 11:11:16.553413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.588 [2024-07-22 11:11:16.553425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.588 [2024-07-22 11:11:16.553433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.588 [2024-07-22 11:11:16.553438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.589 [2024-07-22 11:11:16.553452] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.589 [2024-07-22 11:11:16.553488] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.589 [2024-07-22 11:11:16.553494] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.589 [2024-07-22 11:11:16.553497] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.553501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.589 [2024-07-22 11:11:16.553510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.553514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.553517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.589 [2024-07-22 11:11:16.553523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.589 [2024-07-22 11:11:16.553537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.589 [2024-07-22 11:11:16.553588] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.589 [2024-07-22 11:11:16.553594] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.589 [2024-07-22 11:11:16.553597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.553601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.589 [2024-07-22 11:11:16.553609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.553613] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.553617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.589 [2024-07-22 11:11:16.553623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.589 [2024-07-22 11:11:16.553637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.589 [2024-07-22 11:11:16.553672] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.589 [2024-07-22 11:11:16.553678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.589 [2024-07-22 11:11:16.553682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.553686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.589 [2024-07-22 11:11:16.553694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.553698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.553702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.589 [2024-07-22 11:11:16.553708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.589 [2024-07-22 11:11:16.553722] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.589 [2024-07-22 11:11:16.553768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.589 [2024-07-22 11:11:16.553773] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.589 [2024-07-22 11:11:16.553777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.553780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.589 [2024-07-22 11:11:16.553789] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.553793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.553796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.589 [2024-07-22 11:11:16.553802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.589 [2024-07-22 11:11:16.553816] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.589 [2024-07-22 11:11:16.557867] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.589 [2024-07-22 11:11:16.557889] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.589 [2024-07-22 11:11:16.557893] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.557897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.589 [2024-07-22 11:11:16.557907] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.557912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.557916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204c830) 01:14:11.589 [2024-07-22 11:11:16.557923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.589 [2024-07-22 11:11:16.557942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2098440, cid 3, qid 0 01:14:11.589 [2024-07-22 11:11:16.557994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.589 [2024-07-22 11:11:16.557999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.589 [2024-07-22 11:11:16.558003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.558007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2098440) on tqpair=0x204c830 01:14:11.589 [2024-07-22 11:11:16.558014] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 
milliseconds 01:14:11.589 01:14:11.589 11:11:16 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 01:14:11.589 [2024-07-22 11:11:16.604312] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:14:11.589 [2024-07-22 11:11:16.604362] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88992 ] 01:14:11.589 [2024-07-22 11:11:16.741359] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 01:14:11.589 [2024-07-22 11:11:16.741424] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 01:14:11.589 [2024-07-22 11:11:16.741429] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 01:14:11.589 [2024-07-22 11:11:16.741445] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 01:14:11.589 [2024-07-22 11:11:16.741452] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 01:14:11.589 [2024-07-22 11:11:16.741583] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 01:14:11.589 [2024-07-22 11:11:16.741620] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a40830 0 01:14:11.589 [2024-07-22 11:11:16.748869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 01:14:11.589 [2024-07-22 11:11:16.748888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 01:14:11.589 [2024-07-22 11:11:16.748893] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 01:14:11.589 [2024-07-22 11:11:16.748897] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 01:14:11.589 [2024-07-22 11:11:16.748939] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.748945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.748949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a40830) 01:14:11.589 [2024-07-22 11:11:16.748962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:14:11.589 [2024-07-22 11:11:16.748990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bfc0, cid 0, qid 0 01:14:11.589 [2024-07-22 11:11:16.756870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.589 [2024-07-22 11:11:16.756881] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.589 [2024-07-22 11:11:16.756885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.756890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bfc0) on tqpair=0x1a40830 01:14:11.589 [2024-07-22 11:11:16.756901] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 01:14:11.589 [2024-07-22 11:11:16.756908] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 01:14:11.589 [2024-07-22 11:11:16.756914] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 01:14:11.589 [2024-07-22 11:11:16.756930] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.756934] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.756938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a40830) 01:14:11.589 [2024-07-22 11:11:16.756946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.589 [2024-07-22 11:11:16.756970] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bfc0, cid 0, qid 0 01:14:11.589 [2024-07-22 11:11:16.757019] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.589 [2024-07-22 11:11:16.757026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.589 [2024-07-22 11:11:16.757030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.757034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bfc0) on tqpair=0x1a40830 01:14:11.589 [2024-07-22 11:11:16.757039] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 01:14:11.589 [2024-07-22 11:11:16.757046] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 01:14:11.589 [2024-07-22 11:11:16.757052] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.757056] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.757060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a40830) 01:14:11.589 [2024-07-22 11:11:16.757066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.589 [2024-07-22 11:11:16.757082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bfc0, cid 0, qid 0 01:14:11.589 [2024-07-22 11:11:16.757131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.589 [2024-07-22 11:11:16.757137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.589 [2024-07-22 11:11:16.757141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.757145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bfc0) on tqpair=0x1a40830 01:14:11.589 [2024-07-22 11:11:16.757151] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 01:14:11.589 [2024-07-22 11:11:16.757158] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 01:14:11.589 [2024-07-22 11:11:16.757165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.757169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.757172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a40830) 01:14:11.589 [2024-07-22 11:11:16.757178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
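From here on, spdk_nvme_identify is working against the I/O subsystem nqn.2016-06.io.spdk:cnode1, and the log repeats the connect sequence (icreq/icresp exchange, FABRIC CONNECT, CC/CSTS handshake). Identify-style tools commonly drive this through the probe/attach callback pattern rather than a direct connect; the sketch below illustrates that pattern, with callback and function names chosen for the example rather than taken from the tool.

/* Sketch: enumerate and attach to every controller reachable through a
 * transport ID, printing the CNTLID that also appears in the log above. */
#include <stdbool.h>
#include <stdio.h>

#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	(void)ctx;
	(void)opts;
	printf("probing %s\n", trid->subnqn);
	return true;	/* attach to every controller that answers */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	(void)ctx;
	(void)opts;
	printf("attached to %s, CNTLID 0x%04x\n", trid->subnqn, cdata->cntlid);
}

static int
enumerate(const char *trid_str)
{
	struct spdk_nvme_transport_id trid = {0};

	if (spdk_nvme_transport_id_parse(&trid, trid_str) != 0) {
		return -1;
	}

	/* Runs the same connect sequence seen in the log for each controller
	 * behind the transport ID. */
	return spdk_nvme_probe(&trid, NULL, probe_cb, attach_cb, NULL);
}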
01:14:11.589 [2024-07-22 11:11:16.757194] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bfc0, cid 0, qid 0 01:14:11.589 [2024-07-22 11:11:16.757240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.589 [2024-07-22 11:11:16.757246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.589 [2024-07-22 11:11:16.757250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.757254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bfc0) on tqpair=0x1a40830 01:14:11.589 [2024-07-22 11:11:16.757259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 01:14:11.589 [2024-07-22 11:11:16.757267] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.757271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.589 [2024-07-22 11:11:16.757275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a40830) 01:14:11.589 [2024-07-22 11:11:16.757281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.589 [2024-07-22 11:11:16.757297] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bfc0, cid 0, qid 0 01:14:11.589 [2024-07-22 11:11:16.757333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.589 [2024-07-22 11:11:16.757338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.590 [2024-07-22 11:11:16.757342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bfc0) on tqpair=0x1a40830 01:14:11.590 [2024-07-22 11:11:16.757351] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 01:14:11.590 [2024-07-22 11:11:16.757356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 01:14:11.590 [2024-07-22 11:11:16.757363] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 01:14:11.590 [2024-07-22 11:11:16.757468] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 01:14:11.590 [2024-07-22 11:11:16.757472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 01:14:11.590 [2024-07-22 11:11:16.757481] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a40830) 01:14:11.590 [2024-07-22 11:11:16.757494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.590 [2024-07-22 11:11:16.757509] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bfc0, cid 0, qid 0 01:14:11.590 [2024-07-22 11:11:16.757555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.590 
[2024-07-22 11:11:16.757561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.590 [2024-07-22 11:11:16.757565] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bfc0) on tqpair=0x1a40830 01:14:11.590 [2024-07-22 11:11:16.757573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 01:14:11.590 [2024-07-22 11:11:16.757582] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a40830) 01:14:11.590 [2024-07-22 11:11:16.757595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.590 [2024-07-22 11:11:16.757610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bfc0, cid 0, qid 0 01:14:11.590 [2024-07-22 11:11:16.757656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.590 [2024-07-22 11:11:16.757662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.590 [2024-07-22 11:11:16.757666] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bfc0) on tqpair=0x1a40830 01:14:11.590 [2024-07-22 11:11:16.757674] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 01:14:11.590 [2024-07-22 11:11:16.757679] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 01:14:11.590 [2024-07-22 11:11:16.757686] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 01:14:11.590 [2024-07-22 11:11:16.757695] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 01:14:11.590 [2024-07-22 11:11:16.757704] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a40830) 01:14:11.590 [2024-07-22 11:11:16.757714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.590 [2024-07-22 11:11:16.757729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bfc0, cid 0, qid 0 01:14:11.590 [2024-07-22 11:11:16.757811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:14:11.590 [2024-07-22 11:11:16.757816] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:14:11.590 [2024-07-22 11:11:16.757820] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757824] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a40830): datao=0, datal=4096, cccid=0 01:14:11.590 [2024-07-22 11:11:16.757829] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8bfc0) on 
tqpair(0x1a40830): expected_datao=0, payload_size=4096 01:14:11.590 [2024-07-22 11:11:16.757834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757841] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757845] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757867] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.590 [2024-07-22 11:11:16.757872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.590 [2024-07-22 11:11:16.757876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bfc0) on tqpair=0x1a40830 01:14:11.590 [2024-07-22 11:11:16.757888] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 01:14:11.590 [2024-07-22 11:11:16.757893] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 01:14:11.590 [2024-07-22 11:11:16.757898] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 01:14:11.590 [2024-07-22 11:11:16.757903] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 01:14:11.590 [2024-07-22 11:11:16.757907] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 01:14:11.590 [2024-07-22 11:11:16.757913] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 01:14:11.590 [2024-07-22 11:11:16.757921] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 01:14:11.590 [2024-07-22 11:11:16.757928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.757936] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a40830) 01:14:11.590 [2024-07-22 11:11:16.757942] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 01:14:11.590 [2024-07-22 11:11:16.757959] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bfc0, cid 0, qid 0 01:14:11.590 [2024-07-22 11:11:16.758005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.590 [2024-07-22 11:11:16.758011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.590 [2024-07-22 11:11:16.758014] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758018] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bfc0) on tqpair=0x1a40830 01:14:11.590 [2024-07-22 11:11:16.758028] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758032] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a40830) 01:14:11.590 [2024-07-22 11:11:16.758041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 01:14:11.590 [2024-07-22 11:11:16.758047] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a40830) 01:14:11.590 [2024-07-22 11:11:16.758060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:14:11.590 [2024-07-22 11:11:16.758066] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758073] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a40830) 01:14:11.590 [2024-07-22 11:11:16.758079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:14:11.590 [2024-07-22 11:11:16.758085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a40830) 01:14:11.590 [2024-07-22 11:11:16.758097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:14:11.590 [2024-07-22 11:11:16.758102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 01:14:11.590 [2024-07-22 11:11:16.758110] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 01:14:11.590 [2024-07-22 11:11:16.758116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a40830) 01:14:11.590 [2024-07-22 11:11:16.758125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.590 [2024-07-22 11:11:16.758142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bfc0, cid 0, qid 0 01:14:11.590 [2024-07-22 11:11:16.758148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c140, cid 1, qid 0 01:14:11.590 [2024-07-22 11:11:16.758152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c2c0, cid 2, qid 0 01:14:11.590 [2024-07-22 11:11:16.758157] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c440, cid 3, qid 0 01:14:11.590 [2024-07-22 11:11:16.758161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c5c0, cid 4, qid 0 01:14:11.590 [2024-07-22 11:11:16.758229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.590 [2024-07-22 11:11:16.758235] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.590 [2024-07-22 11:11:16.758239] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758242] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c5c0) on tqpair=0x1a40830 
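(Aside: the *DEBUG* lines in this stretch are the userspace SPDK NVMe host walking its controller-initialization state machine over TCP — fabric property get/set to flip CC.EN and poll CSTS.RDY, IDENTIFY, async-event configuration, then keep-alive and queue setup below. A trace like this can usually be reproduced by hand against the same listener with the in-repo identify example; the binary path and the -r/-L options in this sketch are assumptions about this particular SPDK build, not values taken from the log:

# minimal sketch, assuming a debug build of SPDK and the listener advertised
# further down in this log (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1)
SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples   # assumed install layout
"$SPDK_EXAMPLE_DIR/identify" \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
  -L all   # assumed log-flag option; the DEBUG output only appears on debug builds
)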
01:14:11.590 [2024-07-22 11:11:16.758250] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 01:14:11.590 [2024-07-22 11:11:16.758256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 01:14:11.590 [2024-07-22 11:11:16.758264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 01:14:11.590 [2024-07-22 11:11:16.758270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 01:14:11.590 [2024-07-22 11:11:16.758276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758283] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a40830) 01:14:11.590 [2024-07-22 11:11:16.758289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:14:11.590 [2024-07-22 11:11:16.758305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c5c0, cid 4, qid 0 01:14:11.590 [2024-07-22 11:11:16.758341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.590 [2024-07-22 11:11:16.758347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.590 [2024-07-22 11:11:16.758350] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758354] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c5c0) on tqpair=0x1a40830 01:14:11.590 [2024-07-22 11:11:16.758405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 01:14:11.590 [2024-07-22 11:11:16.758413] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 01:14:11.590 [2024-07-22 11:11:16.758420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a40830) 01:14:11.590 [2024-07-22 11:11:16.758430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.590 [2024-07-22 11:11:16.758445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c5c0, cid 4, qid 0 01:14:11.590 [2024-07-22 11:11:16.758491] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:14:11.590 [2024-07-22 11:11:16.758497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:14:11.590 [2024-07-22 11:11:16.758500] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:14:11.590 [2024-07-22 11:11:16.758504] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a40830): datao=0, datal=4096, cccid=4 01:14:11.591 [2024-07-22 11:11:16.758509] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8c5c0) on tqpair(0x1a40830): expected_datao=0, payload_size=4096 01:14:11.591 [2024-07-22 11:11:16.758513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 01:14:11.591 [2024-07-22 11:11:16.758520] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758523] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.591 [2024-07-22 11:11:16.758538] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.591 [2024-07-22 11:11:16.758541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c5c0) on tqpair=0x1a40830 01:14:11.591 [2024-07-22 11:11:16.758553] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 01:14:11.591 [2024-07-22 11:11:16.758570] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 01:14:11.591 [2024-07-22 11:11:16.758579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 01:14:11.591 [2024-07-22 11:11:16.758586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a40830) 01:14:11.591 [2024-07-22 11:11:16.758595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.591 [2024-07-22 11:11:16.758611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c5c0, cid 4, qid 0 01:14:11.591 [2024-07-22 11:11:16.758679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:14:11.591 [2024-07-22 11:11:16.758685] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:14:11.591 [2024-07-22 11:11:16.758689] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758692] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a40830): datao=0, datal=4096, cccid=4 01:14:11.591 [2024-07-22 11:11:16.758697] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8c5c0) on tqpair(0x1a40830): expected_datao=0, payload_size=4096 01:14:11.591 [2024-07-22 11:11:16.758701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758707] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758711] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.591 [2024-07-22 11:11:16.758725] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.591 [2024-07-22 11:11:16.758728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c5c0) on tqpair=0x1a40830 01:14:11.591 [2024-07-22 11:11:16.758747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 01:14:11.591 [2024-07-22 11:11:16.758756] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors 
(timeout 30000 ms) 01:14:11.591 [2024-07-22 11:11:16.758763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a40830) 01:14:11.591 [2024-07-22 11:11:16.758772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.591 [2024-07-22 11:11:16.758787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c5c0, cid 4, qid 0 01:14:11.591 [2024-07-22 11:11:16.758834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:14:11.591 [2024-07-22 11:11:16.758839] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:14:11.591 [2024-07-22 11:11:16.758843] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758857] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a40830): datao=0, datal=4096, cccid=4 01:14:11.591 [2024-07-22 11:11:16.758862] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8c5c0) on tqpair(0x1a40830): expected_datao=0, payload_size=4096 01:14:11.591 [2024-07-22 11:11:16.758866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758872] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758876] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758885] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.591 [2024-07-22 11:11:16.758890] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.591 [2024-07-22 11:11:16.758894] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c5c0) on tqpair=0x1a40830 01:14:11.591 [2024-07-22 11:11:16.758904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 01:14:11.591 [2024-07-22 11:11:16.758912] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 01:14:11.591 [2024-07-22 11:11:16.758922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 01:14:11.591 [2024-07-22 11:11:16.758928] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 01:14:11.591 [2024-07-22 11:11:16.758933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 01:14:11.591 [2024-07-22 11:11:16.758938] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 01:14:11.591 [2024-07-22 11:11:16.758944] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 01:14:11.591 [2024-07-22 11:11:16.758948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 01:14:11.591 [2024-07-22 11:11:16.758954] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 01:14:11.591 [2024-07-22 11:11:16.758970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758974] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a40830) 01:14:11.591 [2024-07-22 11:11:16.758979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.591 [2024-07-22 11:11:16.758986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.758993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a40830) 01:14:11.591 [2024-07-22 11:11:16.758999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 01:14:11.591 [2024-07-22 11:11:16.759018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c5c0, cid 4, qid 0 01:14:11.591 [2024-07-22 11:11:16.759024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c740, cid 5, qid 0 01:14:11.591 [2024-07-22 11:11:16.759074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.591 [2024-07-22 11:11:16.759080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.591 [2024-07-22 11:11:16.759083] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c5c0) on tqpair=0x1a40830 01:14:11.591 [2024-07-22 11:11:16.759093] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.591 [2024-07-22 11:11:16.759098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.591 [2024-07-22 11:11:16.759102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c740) on tqpair=0x1a40830 01:14:11.591 [2024-07-22 11:11:16.759115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a40830) 01:14:11.591 [2024-07-22 11:11:16.759124] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.591 [2024-07-22 11:11:16.759139] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c740, cid 5, qid 0 01:14:11.591 [2024-07-22 11:11:16.759181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.591 [2024-07-22 11:11:16.759187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.591 [2024-07-22 11:11:16.759190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c740) on tqpair=0x1a40830 01:14:11.591 [2024-07-22 11:11:16.759203] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a40830) 01:14:11.591 [2024-07-22 11:11:16.759213] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.591 [2024-07-22 11:11:16.759228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c740, cid 5, qid 0 01:14:11.591 [2024-07-22 11:11:16.759268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.591 [2024-07-22 11:11:16.759274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.591 [2024-07-22 11:11:16.759277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c740) on tqpair=0x1a40830 01:14:11.591 [2024-07-22 11:11:16.759290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a40830) 01:14:11.591 [2024-07-22 11:11:16.759300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.591 [2024-07-22 11:11:16.759314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c740, cid 5, qid 0 01:14:11.591 [2024-07-22 11:11:16.759356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.591 [2024-07-22 11:11:16.759361] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.591 [2024-07-22 11:11:16.759365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c740) on tqpair=0x1a40830 01:14:11.591 [2024-07-22 11:11:16.759383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759388] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a40830) 01:14:11.591 [2024-07-22 11:11:16.759394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.591 [2024-07-22 11:11:16.759400] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a40830) 01:14:11.591 [2024-07-22 11:11:16.759410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.591 [2024-07-22 11:11:16.759416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1a40830) 01:14:11.591 [2024-07-22 11:11:16.759426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.591 [2024-07-22 11:11:16.759433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a40830) 01:14:11.591 [2024-07-22 11:11:16.759442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.591 [2024-07-22 11:11:16.759457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c740, cid 5, qid 0 01:14:11.591 [2024-07-22 11:11:16.759462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c5c0, cid 4, qid 0 01:14:11.591 [2024-07-22 11:11:16.759467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c8c0, cid 6, qid 0 01:14:11.591 [2024-07-22 11:11:16.759471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8ca40, cid 7, qid 0 01:14:11.591 [2024-07-22 11:11:16.759591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:14:11.591 [2024-07-22 11:11:16.759597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:14:11.591 [2024-07-22 11:11:16.759600] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759604] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a40830): datao=0, datal=8192, cccid=5 01:14:11.591 [2024-07-22 11:11:16.759609] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8c740) on tqpair(0x1a40830): expected_datao=0, payload_size=8192 01:14:11.591 [2024-07-22 11:11:16.759613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759631] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759635] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:14:11.591 [2024-07-22 11:11:16.759645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:14:11.591 [2024-07-22 11:11:16.759649] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759653] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a40830): datao=0, datal=512, cccid=4 01:14:11.591 [2024-07-22 11:11:16.759657] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8c5c0) on tqpair(0x1a40830): expected_datao=0, payload_size=512 01:14:11.591 [2024-07-22 11:11:16.759662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.591 [2024-07-22 11:11:16.759667] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:14:11.592 [2024-07-22 11:11:16.759671] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:14:11.592 [2024-07-22 11:11:16.759677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:14:11.592 [2024-07-22 11:11:16.759682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:14:11.592 [2024-07-22 11:11:16.759685] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:14:11.592 [2024-07-22 11:11:16.759689] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a40830): datao=0, datal=512, cccid=6 01:14:11.592 [2024-07-22 11:11:16.759694] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8c8c0) on tqpair(0x1a40830): expected_datao=0, payload_size=512 01:14:11.592 [2024-07-22 11:11:16.759698] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.592 [2024-07-22 11:11:16.759704] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:14:11.592 [2024-07-22 11:11:16.759708] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:14:11.592 [2024-07-22 11:11:16.759713] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:14:11.592 [2024-07-22 11:11:16.759718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:14:11.592 [2024-07-22 11:11:16.759722] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:14:11.592 [2024-07-22 11:11:16.759725] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a40830): datao=0, datal=4096, cccid=7 01:14:11.592 [2024-07-22 11:11:16.759730] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8ca40) on tqpair(0x1a40830): expected_datao=0, payload_size=4096 01:14:11.592 [2024-07-22 11:11:16.759734] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.592 [2024-07-22 11:11:16.759740] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:14:11.592 [2024-07-22 11:11:16.759744] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:14:11.592 [2024-07-22 11:11:16.759751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.592 [2024-07-22 11:11:16.759756] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.592 [2024-07-22 11:11:16.759760] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.592 [2024-07-22 11:11:16.759764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c740) on tqpair=0x1a40830 01:14:11.592 [2024-07-22 11:11:16.759776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.592 [2024-07-22 11:11:16.759782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.592 [2024-07-22 11:11:16.759785] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.592 [2024-07-22 11:11:16.759790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c5c0) on tqpair=0x1a40830 01:14:11.592 [2024-07-22 11:11:16.759804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.592 [2024-07-22 11:11:16.759809] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.592 [2024-07-22 11:11:16.759813] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.592 [2024-07-22 11:11:16.759817] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c8c0) on tqpair=0x1a40830 01:14:11.592 [2024-07-22 11:11:16.759823] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.592 [2024-07-22 11:11:16.759829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.592 ===================================================== 01:14:11.592 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:14:11.592 ===================================================== 01:14:11.592 Controller Capabilities/Features 01:14:11.592 ================================ 01:14:11.592 Vendor ID: 8086 01:14:11.592 Subsystem Vendor ID: 8086 01:14:11.592 Serial Number: SPDK00000000000001 01:14:11.592 Model Number: SPDK bdev Controller 01:14:11.592 Firmware Version: 24.09 01:14:11.592 Recommended Arb Burst: 6 01:14:11.592 IEEE OUI Identifier: e4 d2 5c 01:14:11.592 Multi-path I/O 01:14:11.592 May have multiple subsystem ports: Yes 01:14:11.592 May have multiple controllers: Yes 01:14:11.592 Associated with SR-IOV VF: No 01:14:11.592 Max Data Transfer Size: 131072 01:14:11.592 Max Number of Namespaces: 32 01:14:11.592 Max Number of I/O Queues: 127 01:14:11.592 NVMe Specification Version (VS): 1.3 01:14:11.592 NVMe Specification Version (Identify): 1.3 01:14:11.592 Maximum Queue 
Entries: 128 01:14:11.592 Contiguous Queues Required: Yes 01:14:11.592 Arbitration Mechanisms Supported 01:14:11.592 Weighted Round Robin: Not Supported 01:14:11.592 Vendor Specific: Not Supported 01:14:11.592 Reset Timeout: 15000 ms 01:14:11.592 Doorbell Stride: 4 bytes 01:14:11.592 NVM Subsystem Reset: Not Supported 01:14:11.592 Command Sets Supported 01:14:11.592 NVM Command Set: Supported 01:14:11.592 Boot Partition: Not Supported 01:14:11.592 Memory Page Size Minimum: 4096 bytes 01:14:11.592 Memory Page Size Maximum: 4096 bytes 01:14:11.592 Persistent Memory Region: Not Supported 01:14:11.592 Optional Asynchronous Events Supported 01:14:11.592 Namespace Attribute Notices: Supported 01:14:11.592 Firmware Activation Notices: Not Supported 01:14:11.592 ANA Change Notices: Not Supported 01:14:11.592 PLE Aggregate Log Change Notices: Not Supported 01:14:11.592 LBA Status Info Alert Notices: Not Supported 01:14:11.592 EGE Aggregate Log Change Notices: Not Supported 01:14:11.592 Normal NVM Subsystem Shutdown event: Not Supported 01:14:11.592 Zone Descriptor Change Notices: Not Supported 01:14:11.592 Discovery Log Change Notices: Not Supported 01:14:11.592 Controller Attributes 01:14:11.592 128-bit Host Identifier: Supported 01:14:11.592 Non-Operational Permissive Mode: Not Supported 01:14:11.592 NVM Sets: Not Supported 01:14:11.592 Read Recovery Levels: Not Supported 01:14:11.592 Endurance Groups: Not Supported 01:14:11.592 Predictable Latency Mode: Not Supported 01:14:11.592 Traffic Based Keep ALive: Not Supported 01:14:11.592 Namespace Granularity: Not Supported 01:14:11.592 SQ Associations: Not Supported 01:14:11.592 UUID List: Not Supported 01:14:11.592 Multi-Domain Subsystem: Not Supported 01:14:11.592 Fixed Capacity Management: Not Supported 01:14:11.592 Variable Capacity Management: Not Supported 01:14:11.592 Delete Endurance Group: Not Supported 01:14:11.592 Delete NVM Set: Not Supported 01:14:11.592 Extended LBA Formats Supported: Not Supported 01:14:11.592 Flexible Data Placement Supported: Not Supported 01:14:11.592 01:14:11.592 Controller Memory Buffer Support 01:14:11.592 ================================ 01:14:11.592 Supported: No 01:14:11.592 01:14:11.592 Persistent Memory Region Support 01:14:11.592 ================================ 01:14:11.592 Supported: No 01:14:11.592 01:14:11.592 Admin Command Set Attributes 01:14:11.592 ============================ 01:14:11.592 Security Send/Receive: Not Supported 01:14:11.592 Format NVM: Not Supported 01:14:11.592 Firmware Activate/Download: Not Supported 01:14:11.592 Namespace Management: Not Supported 01:14:11.592 Device Self-Test: Not Supported 01:14:11.592 Directives: Not Supported 01:14:11.592 NVMe-MI: Not Supported 01:14:11.592 Virtualization Management: Not Supported 01:14:11.592 Doorbell Buffer Config: Not Supported 01:14:11.592 Get LBA Status Capability: Not Supported 01:14:11.592 Command & Feature Lockdown Capability: Not Supported 01:14:11.592 Abort Command Limit: 4 01:14:11.592 Async Event Request Limit: 4 01:14:11.592 Number of Firmware Slots: N/A 01:14:11.592 Firmware Slot 1 Read-Only: N/A 01:14:11.592 Firmware Activation Without Reset: N/A 01:14:11.592 Multiple Update Detection Support: N/A 01:14:11.592 Firmware Update Granularity: No Information Provided 01:14:11.592 Per-Namespace SMART Log: No 01:14:11.592 Asymmetric Namespace Access Log Page: Not Supported 01:14:11.592 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 01:14:11.592 Command Effects Log Page: Supported 01:14:11.592 Get Log Page Extended Data: Supported 
01:14:11.592 Telemetry Log Pages: Not Supported 01:14:11.592 Persistent Event Log Pages: Not Supported 01:14:11.592 Supported Log Pages Log Page: May Support 01:14:11.592 Commands Supported & Effects Log Page: Not Supported 01:14:11.592 Feature Identifiers & Effects Log Page:May Support 01:14:11.592 NVMe-MI Commands & Effects Log Page: May Support 01:14:11.592 Data Area 4 for Telemetry Log: Not Supported 01:14:11.592 Error Log Page Entries Supported: 128 01:14:11.592 Keep Alive: Supported 01:14:11.592 Keep Alive Granularity: 10000 ms 01:14:11.592 01:14:11.592 NVM Command Set Attributes 01:14:11.592 ========================== 01:14:11.592 Submission Queue Entry Size 01:14:11.592 Max: 64 01:14:11.592 Min: 64 01:14:11.592 Completion Queue Entry Size 01:14:11.592 Max: 16 01:14:11.592 Min: 16 01:14:11.592 Number of Namespaces: 32 01:14:11.592 Compare Command: Supported 01:14:11.592 Write Uncorrectable Command: Not Supported 01:14:11.592 Dataset Management Command: Supported 01:14:11.592 Write Zeroes Command: Supported 01:14:11.592 Set Features Save Field: Not Supported 01:14:11.592 Reservations: Supported 01:14:11.592 Timestamp: Not Supported 01:14:11.592 Copy: Supported 01:14:11.592 Volatile Write Cache: Present 01:14:11.592 Atomic Write Unit (Normal): 1 01:14:11.592 Atomic Write Unit (PFail): 1 01:14:11.592 Atomic Compare & Write Unit: 1 01:14:11.592 Fused Compare & Write: Supported 01:14:11.592 Scatter-Gather List 01:14:11.592 SGL Command Set: Supported 01:14:11.592 SGL Keyed: Supported 01:14:11.592 SGL Bit Bucket Descriptor: Not Supported 01:14:11.592 SGL Metadata Pointer: Not Supported 01:14:11.592 Oversized SGL: Not Supported 01:14:11.592 SGL Metadata Address: Not Supported 01:14:11.592 SGL Offset: Supported 01:14:11.592 Transport SGL Data Block: Not Supported 01:14:11.592 Replay Protected Memory Block: Not Supported 01:14:11.592 01:14:11.592 Firmware Slot Information 01:14:11.592 ========================= 01:14:11.592 Active slot: 1 01:14:11.592 Slot 1 Firmware Revision: 24.09 01:14:11.592 01:14:11.592 01:14:11.592 Commands Supported and Effects 01:14:11.592 ============================== 01:14:11.592 Admin Commands 01:14:11.592 -------------- 01:14:11.592 Get Log Page (02h): Supported 01:14:11.592 Identify (06h): Supported 01:14:11.592 Abort (08h): Supported 01:14:11.592 Set Features (09h): Supported 01:14:11.592 Get Features (0Ah): Supported 01:14:11.593 Asynchronous Event Request (0Ch): Supported 01:14:11.593 Keep Alive (18h): Supported 01:14:11.593 I/O Commands 01:14:11.593 ------------ 01:14:11.593 Flush (00h): Supported LBA-Change 01:14:11.593 Write (01h): Supported LBA-Change 01:14:11.593 Read (02h): Supported 01:14:11.593 Compare (05h): Supported 01:14:11.593 Write Zeroes (08h): Supported LBA-Change 01:14:11.593 Dataset Management (09h): Supported LBA-Change 01:14:11.593 Copy (19h): Supported LBA-Change 01:14:11.593 01:14:11.593 Error Log 01:14:11.593 ========= 01:14:11.593 01:14:11.593 Arbitration 01:14:11.593 =========== 01:14:11.593 Arbitration Burst: 1 01:14:11.593 01:14:11.593 Power Management 01:14:11.593 ================ 01:14:11.593 Number of Power States: 1 01:14:11.593 Current Power State: Power State #0 01:14:11.593 Power State #0: 01:14:11.593 Max Power: 0.00 W 01:14:11.593 Non-Operational State: Operational 01:14:11.593 Entry Latency: Not Reported 01:14:11.593 Exit Latency: Not Reported 01:14:11.593 Relative Read Throughput: 0 01:14:11.593 Relative Read Latency: 0 01:14:11.593 Relative Write Throughput: 0 01:14:11.593 Relative Write Latency: 0 01:14:11.593 Idle 
Power: Not Reported 01:14:11.593 Active Power: Not Reported 01:14:11.593 Non-Operational Permissive Mode: Not Supported 01:14:11.593 01:14:11.593 Health Information 01:14:11.593 ================== 01:14:11.593 Critical Warnings: 01:14:11.593 Available Spare Space: OK 01:14:11.593 Temperature: OK 01:14:11.593 Device Reliability: OK 01:14:11.593 Read Only: No 01:14:11.593 Volatile Memory Backup: OK 01:14:11.593 Current Temperature: 0 Kelvin (-273 Celsius) 01:14:11.593 Temperature Threshold: 0 Kelvin (-273 Celsius) 01:14:11.593 Available Spare: 0% 01:14:11.593 Available Spare Threshold: 0% 01:14:11.593 Life Percentage Used:[2024-07-22 11:11:16.759832] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.759836] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8ca40) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.759960] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.759966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a40830) 01:14:11.593 [2024-07-22 11:11:16.759972] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.593 [2024-07-22 11:11:16.759991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8ca40, cid 7, qid 0 01:14:11.593 [2024-07-22 11:11:16.760034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.593 [2024-07-22 11:11:16.760040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.593 [2024-07-22 11:11:16.760043] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8ca40) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.760081] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 01:14:11.593 [2024-07-22 11:11:16.760090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bfc0) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.760097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:14:11.593 [2024-07-22 11:11:16.760102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c140) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.760107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:14:11.593 [2024-07-22 11:11:16.760112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c2c0) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.760117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:14:11.593 [2024-07-22 11:11:16.760122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c440) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.760126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:14:11.593 [2024-07-22 11:11:16.760134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760141] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a40830) 01:14:11.593 [2024-07-22 11:11:16.760148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.593 [2024-07-22 11:11:16.760164] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c440, cid 3, qid 0 01:14:11.593 [2024-07-22 11:11:16.760201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.593 [2024-07-22 11:11:16.760207] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.593 [2024-07-22 11:11:16.760211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760214] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c440) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.760221] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760224] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a40830) 01:14:11.593 [2024-07-22 11:11:16.760234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.593 [2024-07-22 11:11:16.760251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c440, cid 3, qid 0 01:14:11.593 [2024-07-22 11:11:16.760308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.593 [2024-07-22 11:11:16.760314] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.593 [2024-07-22 11:11:16.760317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c440) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.760326] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 01:14:11.593 [2024-07-22 11:11:16.760331] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 01:14:11.593 [2024-07-22 11:11:16.760339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a40830) 01:14:11.593 [2024-07-22 11:11:16.760353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.593 [2024-07-22 11:11:16.760367] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c440, cid 3, qid 0 01:14:11.593 [2024-07-22 11:11:16.760408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.593 [2024-07-22 11:11:16.760414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.593 [2024-07-22 11:11:16.760417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c440) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.760431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760435] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a40830) 01:14:11.593 [2024-07-22 11:11:16.760444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.593 [2024-07-22 11:11:16.760458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c440, cid 3, qid 0 01:14:11.593 [2024-07-22 11:11:16.760499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.593 [2024-07-22 11:11:16.760505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.593 [2024-07-22 11:11:16.760508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c440) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.760520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a40830) 01:14:11.593 [2024-07-22 11:11:16.760534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.593 [2024-07-22 11:11:16.760549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c440, cid 3, qid 0 01:14:11.593 [2024-07-22 11:11:16.760590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.593 [2024-07-22 11:11:16.760595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.593 [2024-07-22 11:11:16.760599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c440) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.760611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760619] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a40830) 01:14:11.593 [2024-07-22 11:11:16.760625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.593 [2024-07-22 11:11:16.760639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c440, cid 3, qid 0 01:14:11.593 [2024-07-22 11:11:16.760680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.593 [2024-07-22 11:11:16.760686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.593 [2024-07-22 11:11:16.760689] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c440) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.760701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a40830) 01:14:11.593 
[2024-07-22 11:11:16.760715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.593 [2024-07-22 11:11:16.760729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c440, cid 3, qid 0 01:14:11.593 [2024-07-22 11:11:16.760770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.593 [2024-07-22 11:11:16.760776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.593 [2024-07-22 11:11:16.760779] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c440) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.760791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.760799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a40830) 01:14:11.593 [2024-07-22 11:11:16.760805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.593 [2024-07-22 11:11:16.760820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c440, cid 3, qid 0 01:14:11.593 [2024-07-22 11:11:16.764864] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.593 [2024-07-22 11:11:16.764882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.593 [2024-07-22 11:11:16.764886] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.764891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c440) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.764902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.764906] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.764910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a40830) 01:14:11.593 [2024-07-22 11:11:16.764917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:14:11.593 [2024-07-22 11:11:16.764937] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c440, cid 3, qid 0 01:14:11.593 [2024-07-22 11:11:16.764981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:14:11.593 [2024-07-22 11:11:16.764987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:14:11.593 [2024-07-22 11:11:16.764990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:14:11.593 [2024-07-22 11:11:16.764994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c440) on tqpair=0x1a40830 01:14:11.593 [2024-07-22 11:11:16.765001] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 01:14:11.593 0% 01:14:11.593 Data Units Read: 0 01:14:11.593 Data Units Written: 0 01:14:11.593 Host Read Commands: 0 01:14:11.593 Host Write Commands: 0 01:14:11.593 Controller Busy Time: 0 minutes 01:14:11.593 Power Cycles: 0 01:14:11.593 Power On Hours: 0 hours 01:14:11.593 Unsafe Shutdowns: 0 01:14:11.593 Unrecoverable Media Errors: 0 01:14:11.593 Lifetime Error Log Entries: 0 01:14:11.593 Warning 
Temperature Time: 0 minutes 01:14:11.593 Critical Temperature Time: 0 minutes 01:14:11.594 01:14:11.594 Number of Queues 01:14:11.594 ================ 01:14:11.594 Number of I/O Submission Queues: 127 01:14:11.594 Number of I/O Completion Queues: 127 01:14:11.594 01:14:11.594 Active Namespaces 01:14:11.594 ================= 01:14:11.594 Namespace ID:1 01:14:11.594 Error Recovery Timeout: Unlimited 01:14:11.594 Command Set Identifier: NVM (00h) 01:14:11.594 Deallocate: Supported 01:14:11.594 Deallocated/Unwritten Error: Not Supported 01:14:11.594 Deallocated Read Value: Unknown 01:14:11.594 Deallocate in Write Zeroes: Not Supported 01:14:11.594 Deallocated Guard Field: 0xFFFF 01:14:11.594 Flush: Supported 01:14:11.594 Reservation: Supported 01:14:11.594 Namespace Sharing Capabilities: Multiple Controllers 01:14:11.594 Size (in LBAs): 131072 (0GiB) 01:14:11.594 Capacity (in LBAs): 131072 (0GiB) 01:14:11.594 Utilization (in LBAs): 131072 (0GiB) 01:14:11.594 NGUID: ABCDEF0123456789ABCDEF0123456789 01:14:11.594 EUI64: ABCDEF0123456789 01:14:11.594 UUID: dd9cc267-fa6c-46f6-9fa4-cfd5a1226bfd 01:14:11.594 Thin Provisioning: Not Supported 01:14:11.594 Per-NS Atomic Units: Yes 01:14:11.594 Atomic Boundary Size (Normal): 0 01:14:11.594 Atomic Boundary Size (PFail): 0 01:14:11.594 Atomic Boundary Offset: 0 01:14:11.594 Maximum Single Source Range Length: 65535 01:14:11.594 Maximum Copy Length: 65535 01:14:11.594 Maximum Source Range Count: 1 01:14:11.594 NGUID/EUI64 Never Reused: No 01:14:11.594 Namespace Write Protected: No 01:14:11.594 Number of LBA Formats: 1 01:14:11.594 Current LBA Format: LBA Format #00 01:14:11.594 LBA Format #00: Data Size: 512 Metadata Size: 0 01:14:11.594 01:14:11.594 11:11:16 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:14:11.853 rmmod nvme_tcp 01:14:11.853 rmmod nvme_fabrics 01:14:11.853 rmmod nvme_keyring 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 88949 ']' 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 88949 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 88949 ']' 01:14:11.853 11:11:16 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 88949 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88949 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:14:11.853 killing process with pid 88949 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88949' 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 88949 01:14:11.853 11:11:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 88949 01:14:12.113 11:11:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:14:12.113 11:11:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:14:12.113 11:11:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:14:12.113 11:11:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:14:12.113 11:11:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 01:14:12.113 11:11:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:14:12.113 11:11:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:14:12.113 11:11:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:14:12.113 11:11:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:14:12.113 ************************************ 01:14:12.113 END TEST nvmf_identify 01:14:12.113 ************************************ 01:14:12.113 01:14:12.113 real 0m2.587s 01:14:12.113 user 0m6.581s 01:14:12.113 sys 0m0.804s 01:14:12.113 11:11:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 01:14:12.113 11:11:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:14:12.113 11:11:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:14:12.113 11:11:17 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 01:14:12.113 11:11:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:14:12.113 11:11:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:14:12.113 11:11:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:14:12.113 ************************************ 01:14:12.113 START TEST nvmf_perf 01:14:12.113 ************************************ 01:14:12.113 11:11:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 01:14:12.373 * Looking for test storage... 
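(Aside: interleaved in the xtrace above is the teardown this test performs before the next one starts — delete the subsystem over RPC, stop the target process, and unload the kernel fabrics modules. A minimal standalone sketch of that sequence, assuming the rpc.py path shown in this log and a target pid in $tgt_pid; the real nvmftestfini/killprocess helpers in the common scripts do additional bookkeeping:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# drop the subsystem that backed nqn.2016-06.io.spdk:cnode1
"$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# stop the nvmf target and wait for it to exit ($tgt_pid is assumed here;
# the killprocess helper in this log resolved it to pid 88949)
kill "$tgt_pid" && wait "$tgt_pid"

# unload the kernel initiator modules the test pulled in
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
)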
01:14:12.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:14:12.373 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:14:12.374 Cannot find device "nvmf_tgt_br" 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:14:12.374 Cannot find device "nvmf_tgt_br2" 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:14:12.374 Cannot find device "nvmf_tgt_br" 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:14:12.374 Cannot find device "nvmf_tgt_br2" 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 01:14:12.374 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:14:12.633 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:14:12.633 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:14:12.633 
11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:14:12.633 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:14:12.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:14:12.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 01:14:12.633 01:14:12.633 --- 10.0.0.2 ping statistics --- 01:14:12.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:12.633 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:14:12.892 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:14:12.892 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 01:14:12.892 01:14:12.892 --- 10.0.0.3 ping statistics --- 01:14:12.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:12.892 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:14:12.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
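
At this point nvmf_veth_init has finished building the per-test virtual network: the target side lives in the nvmf_tgt_ns_spdk namespace with nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, and the three peer ends (nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2) are enslaved to the nvmf_br bridge, with iptables accepting TCP port 4420 on nvmf_init_if and allowing forwarding across the bridge. The pings are only a reachability check; the reply to the in-namespace ping of 10.0.0.1 started just above follows right after this sketch. A condensed view of the topology, taken from the commands in the trace (the per-interface ip link set ... up calls are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
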
01:14:12.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 01:14:12.892 01:14:12.892 --- 10.0.0.1 ping statistics --- 01:14:12.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:12.892 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=89163 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 89163 01:14:12.892 11:11:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 89163 ']' 01:14:12.893 11:11:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:12.893 11:11:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 01:14:12.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:12.893 11:11:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:14:12.893 11:11:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 01:14:12.893 11:11:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:14:12.893 [2024-07-22 11:11:17.952895] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:14:12.893 [2024-07-22 11:11:17.952960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:14:12.893 [2024-07-22 11:11:18.097744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:14:13.152 [2024-07-22 11:11:18.167683] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:14:13.152 [2024-07-22 11:11:18.167746] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
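
The target itself is launched inside that namespace: nvmfappstart prepends the NVMF_TARGET_NS_CMD prefix to the app command and then waits for the RPC socket before returning, which is what the 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' line above refers to. A sketch of what this amounts to (the launch line is taken from the trace; the polling loop is only an illustration of waitforlisten, not its actual implementation in autotest_common.sh):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the target answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
      sleep 0.1
  done
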
01:14:13.152 [2024-07-22 11:11:18.167755] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:14:13.152 [2024-07-22 11:11:18.167764] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:14:13.152 [2024-07-22 11:11:18.167771] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:14:13.152 [2024-07-22 11:11:18.167903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:14:13.152 [2024-07-22 11:11:18.169027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:14:13.152 [2024-07-22 11:11:18.168906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:14:13.152 [2024-07-22 11:11:18.169031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:14:13.152 [2024-07-22 11:11:18.241703] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:14:13.720 11:11:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:14:13.720 11:11:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 01:14:13.720 11:11:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:14:13.720 11:11:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 01:14:13.720 11:11:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:14:13.720 11:11:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:14:13.720 11:11:18 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:14:13.720 11:11:18 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 01:14:14.289 11:11:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 01:14:14.289 11:11:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 01:14:14.289 11:11:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 01:14:14.289 11:11:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:14:14.548 11:11:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 01:14:14.548 11:11:19 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 01:14:14.548 11:11:19 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 01:14:14.548 11:11:19 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 01:14:14.549 11:11:19 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:14:14.807 [2024-07-22 11:11:19.843494] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:14:14.807 11:11:19 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:14:15.068 11:11:20 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 01:14:15.068 11:11:20 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:14:15.068 11:11:20 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 01:14:15.068 11:11:20 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 01:14:15.327 11:11:20 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:14:15.587 [2024-07-22 11:11:20.584072] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:14:15.587 11:11:20 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:14:15.587 11:11:20 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 01:14:15.587 11:11:20 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 01:14:15.587 11:11:20 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 01:14:15.587 11:11:20 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 01:14:17.046 Initializing NVMe Controllers 01:14:17.046 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:14:17.046 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 01:14:17.046 Initialization complete. Launching workers. 01:14:17.046 ======================================================== 01:14:17.046 Latency(us) 01:14:17.047 Device Information : IOPS MiB/s Average min max 01:14:17.047 PCIE (0000:00:10.0) NSID 1 from core 0: 21097.06 82.41 1517.78 290.19 8988.71 01:14:17.047 ======================================================== 01:14:17.047 Total : 21097.06 82.41 1517.78 290.19 8988.71 01:14:17.047 01:14:17.047 11:11:21 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:14:18.045 Initializing NVMe Controllers 01:14:18.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:14:18.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:14:18.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:14:18.045 Initialization complete. Launching workers. 01:14:18.045 ======================================================== 01:14:18.045 Latency(us) 01:14:18.045 Device Information : IOPS MiB/s Average min max 01:14:18.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2676.33 10.45 373.46 96.13 4361.88 01:14:18.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.92 0.49 8068.34 6989.88 12019.88 01:14:18.045 ======================================================== 01:14:18.045 Total : 2801.25 10.94 716.61 96.13 12019.88 01:14:18.045 01:14:18.357 11:11:23 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:14:19.739 Initializing NVMe Controllers 01:14:19.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:14:19.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:14:19.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:14:19.739 Initialization complete. Launching workers. 
01:14:19.739 ======================================================== 01:14:19.739 Latency(us) 01:14:19.739 Device Information : IOPS MiB/s Average min max 01:14:19.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10760.00 42.03 2976.35 559.56 6370.72 01:14:19.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4000.00 15.62 8043.49 6974.18 12815.92 01:14:19.739 ======================================================== 01:14:19.739 Total : 14760.00 57.66 4349.56 559.56 12815.92 01:14:19.739 01:14:19.739 11:11:24 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 01:14:19.739 11:11:24 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:14:22.269 Initializing NVMe Controllers 01:14:22.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:14:22.269 Controller IO queue size 128, less than required. 01:14:22.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:14:22.269 Controller IO queue size 128, less than required. 01:14:22.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:14:22.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:14:22.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:14:22.269 Initialization complete. Launching workers. 01:14:22.269 ======================================================== 01:14:22.269 Latency(us) 01:14:22.269 Device Information : IOPS MiB/s Average min max 01:14:22.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2249.59 562.40 57613.51 25603.19 88170.35 01:14:22.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 660.85 165.21 198262.68 39297.99 323311.21 01:14:22.269 ======================================================== 01:14:22.269 Total : 2910.44 727.61 89549.63 25603.19 323311.21 01:14:22.269 01:14:22.269 11:11:27 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 01:14:22.269 Initializing NVMe Controllers 01:14:22.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:14:22.269 Controller IO queue size 128, less than required. 01:14:22.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:14:22.269 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 01:14:22.269 Controller IO queue size 128, less than required. 01:14:22.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:14:22.269 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 01:14:22.269 WARNING: Some requested NVMe devices were skipped 01:14:22.269 No valid NVMe controllers or AIO or URING devices found 01:14:22.269 11:11:27 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 01:14:24.802 Initializing NVMe Controllers 01:14:24.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:14:24.802 Controller IO queue size 128, less than required. 01:14:24.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:14:24.802 Controller IO queue size 128, less than required. 01:14:24.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:14:24.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:14:24.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:14:24.802 Initialization complete. Launching workers. 01:14:24.802 01:14:24.802 ==================== 01:14:24.802 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 01:14:24.802 TCP transport: 01:14:24.802 polls: 13827 01:14:24.802 idle_polls: 9870 01:14:24.802 sock_completions: 3957 01:14:24.802 nvme_completions: 6467 01:14:24.802 submitted_requests: 9624 01:14:24.802 queued_requests: 1 01:14:24.802 01:14:24.802 ==================== 01:14:24.802 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 01:14:24.802 TCP transport: 01:14:24.802 polls: 14139 01:14:24.802 idle_polls: 9106 01:14:24.802 sock_completions: 5033 01:14:24.802 nvme_completions: 6923 01:14:24.802 submitted_requests: 10426 01:14:24.802 queued_requests: 1 01:14:24.802 ======================================================== 01:14:24.802 Latency(us) 01:14:24.802 Device Information : IOPS MiB/s Average min max 01:14:24.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1615.33 403.83 80211.91 41291.06 138018.05 01:14:24.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1729.25 432.31 74770.23 26995.73 151964.83 01:14:24.802 ======================================================== 01:14:24.802 Total : 3344.58 836.14 77398.40 26995.73 151964.83 01:14:24.802 01:14:24.802 11:11:29 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 01:14:24.802 11:11:29 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:14:25.060 11:11:30 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 01:14:25.060 11:11:30 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 01:14:25.060 11:11:30 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 01:14:25.319 11:11:30 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=534bcbea-70e3-47cf-b8c4-e1b2b3179d47 01:14:25.319 11:11:30 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 534bcbea-70e3-47cf-b8c4-e1b2b3179d47 01:14:25.319 11:11:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=534bcbea-70e3-47cf-b8c4-e1b2b3179d47 01:14:25.319 11:11:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 01:14:25.319 11:11:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 
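
The get_lvs_free_mb helper that starts here converts the lvstore's free_clusters/cluster_size pair (printed by bdev_lvol_get_lvstores just below) into a size in MiB: 1278 free clusters x 4 MiB per cluster gives the 5112 MiB used for lbd_0, and the nested lvs_n_0 later yields 1276 x 4 MiB = 5104 MiB. A rough sketch of the computation with this run's values (the helper's internals are not shown in the trace, so the shape of the arithmetic is inferred from its inputs and output):

  lvs_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores)
  fc=$(jq '.[] | select(.uuid=="534bcbea-70e3-47cf-b8c4-e1b2b3179d47") .free_clusters' <<< "$lvs_json")   # 1278
  cs=$(jq '.[] | select(.uuid=="534bcbea-70e3-47cf-b8c4-e1b2b3179d47") .cluster_size'  <<< "$lvs_json")   # 4194304 bytes = 4 MiB
  free_mb=$(( fc * cs / 1024 / 1024 ))                                                                     # 1278 * 4 = 5112
  echo "$free_mb"
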
01:14:25.319 11:11:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 01:14:25.319 11:11:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:14:25.319 11:11:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 01:14:25.319 { 01:14:25.319 "uuid": "534bcbea-70e3-47cf-b8c4-e1b2b3179d47", 01:14:25.319 "name": "lvs_0", 01:14:25.319 "base_bdev": "Nvme0n1", 01:14:25.319 "total_data_clusters": 1278, 01:14:25.319 "free_clusters": 1278, 01:14:25.319 "block_size": 4096, 01:14:25.319 "cluster_size": 4194304 01:14:25.319 } 01:14:25.319 ]' 01:14:25.319 11:11:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="534bcbea-70e3-47cf-b8c4-e1b2b3179d47") .free_clusters' 01:14:25.319 11:11:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 01:14:25.319 11:11:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="534bcbea-70e3-47cf-b8c4-e1b2b3179d47") .cluster_size' 01:14:25.578 11:11:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 01:14:25.578 11:11:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 01:14:25.578 5112 01:14:25.578 11:11:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 01:14:25.578 11:11:30 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 01:14:25.578 11:11:30 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 534bcbea-70e3-47cf-b8c4-e1b2b3179d47 lbd_0 5112 01:14:25.578 11:11:30 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=f87cbeab-665c-4801-adcb-0332455eade0 01:14:25.578 11:11:30 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore f87cbeab-665c-4801-adcb-0332455eade0 lvs_n_0 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=8d37c278-35fc-46c5-a878-30de1890e1f4 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 8d37c278-35fc-46c5-a878-30de1890e1f4 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=8d37c278-35fc-46c5-a878-30de1890e1f4 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 01:14:26.145 { 01:14:26.145 "uuid": "534bcbea-70e3-47cf-b8c4-e1b2b3179d47", 01:14:26.145 "name": "lvs_0", 01:14:26.145 "base_bdev": "Nvme0n1", 01:14:26.145 "total_data_clusters": 1278, 01:14:26.145 "free_clusters": 0, 01:14:26.145 "block_size": 4096, 01:14:26.145 "cluster_size": 4194304 01:14:26.145 }, 01:14:26.145 { 01:14:26.145 "uuid": "8d37c278-35fc-46c5-a878-30de1890e1f4", 01:14:26.145 "name": "lvs_n_0", 01:14:26.145 "base_bdev": "f87cbeab-665c-4801-adcb-0332455eade0", 01:14:26.145 "total_data_clusters": 1276, 01:14:26.145 "free_clusters": 1276, 01:14:26.145 "block_size": 4096, 01:14:26.145 "cluster_size": 4194304 01:14:26.145 } 01:14:26.145 ]' 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | 
select(.uuid=="8d37c278-35fc-46c5-a878-30de1890e1f4") .free_clusters' 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="8d37c278-35fc-46c5-a878-30de1890e1f4") .cluster_size' 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 01:14:26.145 5104 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 01:14:26.145 11:11:31 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8d37c278-35fc-46c5-a878-30de1890e1f4 lbd_nest_0 5104 01:14:26.403 11:11:31 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=52e7088d-a0cc-43a8-99a1-1e86e1da4b01 01:14:26.403 11:11:31 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:14:26.660 11:11:31 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 01:14:26.660 11:11:31 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 52e7088d-a0cc-43a8-99a1-1e86e1da4b01 01:14:26.917 11:11:31 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:14:26.917 11:11:32 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 01:14:26.917 11:11:32 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 01:14:26.917 11:11:32 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 01:14:26.917 11:11:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:14:26.917 11:11:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:14:27.483 Initializing NVMe Controllers 01:14:27.483 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:14:27.483 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 01:14:27.483 WARNING: Some requested NVMe devices were skipped 01:14:27.483 No valid NVMe controllers or AIO or URING devices found 01:14:27.483 11:11:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:14:27.483 11:11:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:14:37.478 Initializing NVMe Controllers 01:14:37.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:14:37.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:14:37.478 Initialization complete. Launching workers. 
01:14:37.478 ======================================================== 01:14:37.478 Latency(us) 01:14:37.478 Device Information : IOPS MiB/s Average min max 01:14:37.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 828.59 103.57 1206.71 297.00 7274.16 01:14:37.478 ======================================================== 01:14:37.478 Total : 828.59 103.57 1206.71 297.00 7274.16 01:14:37.478 01:14:37.478 11:11:42 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 01:14:37.478 11:11:42 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:14:37.478 11:11:42 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:14:38.044 Initializing NVMe Controllers 01:14:38.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:14:38.044 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 01:14:38.044 WARNING: Some requested NVMe devices were skipped 01:14:38.044 No valid NVMe controllers or AIO or URING devices found 01:14:38.044 11:11:42 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:14:38.044 11:11:42 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:14:48.016 Initializing NVMe Controllers 01:14:48.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:14:48.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:14:48.016 Initialization complete. Launching workers. 
01:14:48.016 ======================================================== 01:14:48.016 Latency(us) 01:14:48.016 Device Information : IOPS MiB/s Average min max 01:14:48.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1336.80 167.10 23968.22 6157.44 63836.99 01:14:48.016 ======================================================== 01:14:48.016 Total : 1336.80 167.10 23968.22 6157.44 63836.99 01:14:48.016 01:14:48.274 11:11:53 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 01:14:48.274 11:11:53 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:14:48.274 11:11:53 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:14:48.531 Initializing NVMe Controllers 01:14:48.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:14:48.531 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 01:14:48.531 WARNING: Some requested NVMe devices were skipped 01:14:48.531 No valid NVMe controllers or AIO or URING devices found 01:14:48.531 11:11:53 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:14:48.531 11:11:53 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:15:00.734 Initializing NVMe Controllers 01:15:00.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:15:00.734 Controller IO queue size 128, less than required. 01:15:00.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:15:00.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:15:00.734 Initialization complete. Launching workers. 
01:15:00.734 ======================================================== 01:15:00.734 Latency(us) 01:15:00.734 Device Information : IOPS MiB/s Average min max 01:15:00.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4255.44 531.93 30127.39 11966.54 62418.33 01:15:00.734 ======================================================== 01:15:00.734 Total : 4255.44 531.93 30127.39 11966.54 62418.33 01:15:00.734 01:15:00.734 11:12:03 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:15:00.734 11:12:04 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 52e7088d-a0cc-43a8-99a1-1e86e1da4b01 01:15:00.734 11:12:04 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 01:15:00.734 11:12:04 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f87cbeab-665c-4801-adcb-0332455eade0 01:15:00.734 11:12:04 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:15:00.734 rmmod nvme_tcp 01:15:00.734 rmmod nvme_fabrics 01:15:00.734 rmmod nvme_keyring 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 89163 ']' 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 89163 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 89163 ']' 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 89163 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89163 01:15:00.734 killing process with pid 89163 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89163' 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 89163 01:15:00.734 11:12:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 89163 01:15:01.300 11:12:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:15:01.300 11:12:06 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:15:01.300 11:12:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:15:01.300 11:12:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:15:01.300 11:12:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 01:15:01.300 11:12:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:15:01.300 11:12:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:15:01.300 11:12:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:15:01.300 11:12:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:15:01.300 01:15:01.300 real 0m49.064s 01:15:01.300 user 3m2.579s 01:15:01.300 sys 0m12.592s 01:15:01.300 11:12:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 01:15:01.300 11:12:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:15:01.300 ************************************ 01:15:01.300 END TEST nvmf_perf 01:15:01.300 ************************************ 01:15:01.300 11:12:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:15:01.300 11:12:06 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 01:15:01.300 11:12:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:15:01.300 11:12:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:15:01.300 11:12:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:15:01.300 ************************************ 01:15:01.300 START TEST nvmf_fio_host 01:15:01.300 ************************************ 01:15:01.300 11:12:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 01:15:01.558 * Looking for test storage... 
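
Before the fio_host setup repeats the same pattern, note the teardown order perf.sh used just above: the subsystem is deleted first, then the lvol stack is unwound from the top down (nested lvol bdev, nested lvstore, base lvol bdev, base lvstore) before nvmftestfini removes the namespace and unloads the initiator modules. Condensed from the trace, with rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path:

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rpc.py bdev_lvol_delete 52e7088d-a0cc-43a8-99a1-1e86e1da4b01     # lbd_nest_0 (on lvs_n_0)
  rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
  rpc.py bdev_lvol_delete f87cbeab-665c-4801-adcb-0332455eade0     # lbd_0 (on lvs_0)
  rpc.py bdev_lvol_delete_lvstore -l lvs_0
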
01:15:01.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:15:01.558 11:12:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
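
As with the perf run, nvmftestinit here takes the virt + tcp path and calls nvmf_veth_init, which first tries to tear down any leftover interfaces and namespace; the 'Cannot find device ...' and 'Cannot open network namespace ...' messages just below are therefore expected no-ops (the previous suite's nvmftestfini already removed everything), after which the whole topology is rebuilt from scratch. A rough sketch of the branch being taken (variable names such as $NET_TYPE and $TEST_TRANSPORT are illustrative of what the trace checks, not necessarily the script's exact identifiers):

  if [[ $NET_TYPE == virt && $TEST_TRANSPORT == tcp ]]; then
      nvmf_veth_init    # recreate nvmf_tgt_ns_spdk plus the veth/bridge topology and the 10.0.0.x addresses
  fi
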
01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:15:01.559 Cannot find device "nvmf_tgt_br" 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:15:01.559 Cannot find device "nvmf_tgt_br2" 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:15:01.559 Cannot find device "nvmf_tgt_br" 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:15:01.559 Cannot find device "nvmf_tgt_br2" 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 01:15:01.559 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:15:01.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:15:01.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:15:01.816 11:12:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:15:01.816 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:15:01.816 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:15:01.816 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:15:02.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:15:02.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 01:15:02.074 01:15:02.074 --- 10.0.0.2 ping statistics --- 01:15:02.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:02.074 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:15:02.074 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:15:02.074 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 01:15:02.074 01:15:02.074 --- 10.0.0.3 ping statistics --- 01:15:02.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:02.074 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:15:02.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:15:02.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 01:15:02.074 01:15:02.074 --- 10.0.0.1 ping statistics --- 01:15:02.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:02.074 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:15:02.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=89963 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 89963 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 89963 ']' 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:15:02.074 11:12:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:15:02.074 [2024-07-22 11:12:07.214002] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:02.074 [2024-07-22 11:12:07.214068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:15:02.330 [2024-07-22 11:12:07.359225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:15:02.330 [2024-07-22 11:12:07.405735] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
01:15:02.330 [2024-07-22 11:12:07.405783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:15:02.330 [2024-07-22 11:12:07.405793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:15:02.331 [2024-07-22 11:12:07.405801] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:15:02.331 [2024-07-22 11:12:07.405808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:15:02.331 [2024-07-22 11:12:07.406001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:15:02.331 [2024-07-22 11:12:07.406817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:15:02.331 [2024-07-22 11:12:07.406912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:15:02.331 [2024-07-22 11:12:07.406913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:15:02.331 [2024-07-22 11:12:07.450475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:15:02.896 11:12:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:02.896 11:12:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 01:15:02.896 11:12:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:15:03.154 [2024-07-22 11:12:08.220949] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:03.154 11:12:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 01:15:03.154 11:12:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 01:15:03.154 11:12:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:15:03.154 11:12:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 01:15:03.412 Malloc1 01:15:03.412 11:12:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:15:03.697 11:12:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:15:03.962 11:12:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:15:03.962 [2024-07-22 11:12:09.088086] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:15:03.962 11:12:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:15:04.222 11:12:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:15:04.502 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:15:04.502 fio-3.35 01:15:04.502 Starting 1 thread 01:15:07.079 01:15:07.079 test: (groupid=0, jobs=1): err= 0: pid=90040: Mon Jul 22 11:12:11 2024 01:15:07.079 read: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(87.2MiB/2006msec) 01:15:07.079 slat (nsec): min=1554, max=417290, avg=1715.25, stdev=3403.52 01:15:07.079 clat (usec): min=3004, max=11133, avg=6012.56, stdev=434.82 01:15:07.079 lat (usec): min=3072, max=11135, avg=6014.28, stdev=434.88 01:15:07.079 clat percentiles (usec): 01:15:07.079 | 1.00th=[ 5080], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5735], 01:15:07.079 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6063], 01:15:07.079 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6456], 95.00th=[ 6652], 01:15:07.079 | 99.00th=[ 7046], 99.50th=[ 7439], 99.90th=[ 9110], 99.95th=[10290], 01:15:07.079 | 99.99th=[11076] 01:15:07.079 bw ( KiB/s): min=43712, max=45224, per=99.98%, avg=44524.00, stdev=651.84, samples=4 01:15:07.079 iops : min=10928, max=11306, avg=11131.00, stdev=162.96, samples=4 01:15:07.079 write: IOPS=11.1k, BW=43.3MiB/s (45.4MB/s)(86.9MiB/2006msec); 0 zone resets 01:15:07.079 
slat (nsec): min=1603, max=272458, avg=1764.35, stdev=2017.49 01:15:07.079 clat (usec): min=2850, max=10536, avg=5455.73, stdev=396.07 01:15:07.079 lat (usec): min=2865, max=10538, avg=5457.50, stdev=396.24 01:15:07.079 clat percentiles (usec): 01:15:07.079 | 1.00th=[ 4555], 5.00th=[ 4883], 10.00th=[ 5080], 20.00th=[ 5211], 01:15:07.079 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 01:15:07.079 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5866], 95.00th=[ 5997], 01:15:07.079 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 9110], 99.95th=[ 9765], 01:15:07.079 | 99.99th=[10552] 01:15:07.079 bw ( KiB/s): min=43960, max=44928, per=100.00%, avg=44382.00, stdev=465.39, samples=4 01:15:07.079 iops : min=10990, max=11232, avg=11095.50, stdev=116.35, samples=4 01:15:07.079 lat (msec) : 4=0.23%, 10=99.73%, 20=0.04% 01:15:07.079 cpu : usr=66.73%, sys=26.23%, ctx=6, majf=0, minf=7 01:15:07.079 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:15:07.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:15:07.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:15:07.079 issued rwts: total=22334,22246,0,0 short=0,0,0,0 dropped=0,0,0,0 01:15:07.079 latency : target=0, window=0, percentile=100.00%, depth=128 01:15:07.079 01:15:07.079 Run status group 0 (all jobs): 01:15:07.079 READ: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=87.2MiB (91.5MB), run=2006-2006msec 01:15:07.079 WRITE: bw=43.3MiB/s (45.4MB/s), 43.3MiB/s-43.3MiB/s (45.4MB/s-45.4MB/s), io=86.9MiB (91.1MB), run=2006-2006msec 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 
01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:15:07.079 11:12:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 01:15:07.079 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 01:15:07.079 fio-3.35 01:15:07.079 Starting 1 thread 01:15:09.605 01:15:09.605 test: (groupid=0, jobs=1): err= 0: pid=90084: Mon Jul 22 11:12:14 2024 01:15:09.605 read: IOPS=9608, BW=150MiB/s (157MB/s)(301MiB/2006msec) 01:15:09.605 slat (nsec): min=2517, max=90632, avg=2748.99, stdev=1443.92 01:15:09.605 clat (usec): min=2013, max=15340, avg=7684.86, stdev=2174.58 01:15:09.605 lat (usec): min=2016, max=15343, avg=7687.61, stdev=2174.66 01:15:09.605 clat percentiles (usec): 01:15:09.605 | 1.00th=[ 3490], 5.00th=[ 4293], 10.00th=[ 4948], 20.00th=[ 5800], 01:15:09.605 | 30.00th=[ 6521], 40.00th=[ 7046], 50.00th=[ 7570], 60.00th=[ 8094], 01:15:09.605 | 70.00th=[ 8586], 80.00th=[ 9372], 90.00th=[10421], 95.00th=[11338], 01:15:09.605 | 99.00th=[14091], 99.50th=[14353], 99.90th=[14746], 99.95th=[15008], 01:15:09.605 | 99.99th=[15139] 01:15:09.605 bw ( KiB/s): min=75136, max=82304, per=50.17%, avg=77128.00, stdev=3456.43, samples=4 01:15:09.605 iops : min= 4696, max= 5144, avg=4820.50, stdev=216.03, samples=4 01:15:09.605 write: IOPS=5523, BW=86.3MiB/s (90.5MB/s)(158MiB/1826msec); 0 zone resets 01:15:09.605 slat (usec): min=28, max=443, avg=30.23, stdev= 8.14 01:15:09.605 clat (usec): min=4899, max=17614, avg=9935.45, stdev=2087.25 01:15:09.605 lat (usec): min=4928, max=17643, avg=9965.68, stdev=2088.64 01:15:09.605 clat percentiles (usec): 01:15:09.605 | 1.00th=[ 6325], 5.00th=[ 7111], 10.00th=[ 7570], 20.00th=[ 8160], 01:15:09.605 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[10159], 01:15:09.605 | 70.00th=[10945], 80.00th=[11731], 90.00th=[12780], 95.00th=[13829], 01:15:09.605 | 99.00th=[15795], 99.50th=[16188], 99.90th=[16712], 99.95th=[16909], 01:15:09.605 | 99.99th=[16909] 01:15:09.605 bw ( KiB/s): min=77760, max=85216, per=90.75%, avg=80200.00, stdev=3460.48, samples=4 01:15:09.605 iops : min= 4860, max= 5326, avg=5012.50, stdev=216.28, samples=4 01:15:09.605 lat (msec) : 4=1.89%, 10=74.84%, 20=23.28% 01:15:09.605 cpu : usr=79.36%, sys=16.95%, ctx=7, majf=0, minf=4 01:15:09.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 01:15:09.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:15:09.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:15:09.605 issued rwts: total=19274,10086,0,0 short=0,0,0,0 dropped=0,0,0,0 01:15:09.605 latency : target=0, window=0, percentile=100.00%, depth=128 01:15:09.605 01:15:09.605 Run status group 0 (all jobs): 01:15:09.605 READ: 
bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=301MiB (316MB), run=2006-2006msec 01:15:09.605 WRITE: bw=86.3MiB/s (90.5MB/s), 86.3MiB/s-86.3MiB/s (90.5MB/s-90.5MB/s), io=158MiB (165MB), run=1826-1826msec 01:15:09.605 11:12:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:15:09.605 11:12:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 01:15:09.605 11:12:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 01:15:09.605 11:12:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 01:15:09.605 11:12:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 01:15:09.605 11:12:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 01:15:09.605 11:12:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:15:09.605 11:12:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:15:09.605 11:12:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 01:15:09.605 11:12:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 01:15:09.605 11:12:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:15:09.605 11:12:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 01:15:09.864 Nvme0n1 01:15:09.864 11:12:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 01:15:10.122 11:12:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=f7f769d4-d87f-4bbb-ba79-54706a1d9e35 01:15:10.122 11:12:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb f7f769d4-d87f-4bbb-ba79-54706a1d9e35 01:15:10.122 11:12:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=f7f769d4-d87f-4bbb-ba79-54706a1d9e35 01:15:10.122 11:12:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 01:15:10.122 11:12:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 01:15:10.122 11:12:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 01:15:10.122 11:12:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:15:10.122 11:12:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 01:15:10.122 { 01:15:10.122 "uuid": "f7f769d4-d87f-4bbb-ba79-54706a1d9e35", 01:15:10.122 "name": "lvs_0", 01:15:10.122 "base_bdev": "Nvme0n1", 01:15:10.122 "total_data_clusters": 4, 01:15:10.122 "free_clusters": 4, 01:15:10.122 "block_size": 4096, 01:15:10.122 "cluster_size": 1073741824 01:15:10.122 } 01:15:10.122 ]' 01:15:10.122 11:12:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f7f769d4-d87f-4bbb-ba79-54706a1d9e35") .free_clusters' 01:15:10.381 11:12:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 01:15:10.381 11:12:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="f7f769d4-d87f-4bbb-ba79-54706a1d9e35") .cluster_size' 01:15:10.381 4096 01:15:10.381 11:12:15 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1370 -- # cs=1073741824 01:15:10.381 11:12:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 01:15:10.381 11:12:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 01:15:10.381 11:12:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 01:15:10.381 c2297597-4653-49f4-b7b9-97acfc497e3a 01:15:10.381 11:12:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 01:15:10.640 11:12:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 01:15:10.898 11:12:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:15:11.156 11:12:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:15:11.156 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:15:11.156 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:15:11.156 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:15:11.156 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 01:15:11.156 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:15:11.156 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 01:15:11.156 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 01:15:11.156 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:15:11.157 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:15:11.157 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 01:15:11.157 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:15:11.157 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:15:11.157 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:15:11.157 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:15:11.157 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:15:11.157 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:15:11.157 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:15:11.157 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:15:11.157 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:15:11.157 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:15:11.157 11:12:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:15:11.157 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:15:11.157 fio-3.35 01:15:11.157 Starting 1 thread 01:15:13.690 01:15:13.690 test: (groupid=0, jobs=1): err= 0: pid=90192: Mon Jul 22 11:12:18 2024 01:15:13.690 read: IOPS=7906, BW=30.9MiB/s (32.4MB/s)(62.0MiB/2007msec) 01:15:13.690 slat (nsec): min=1563, max=389597, avg=1928.36, stdev=4039.28 01:15:13.690 clat (usec): min=3014, max=15669, avg=8497.46, stdev=805.54 01:15:13.690 lat (usec): min=3025, max=15670, avg=8499.39, stdev=805.30 01:15:13.690 clat percentiles (usec): 01:15:13.690 | 1.00th=[ 6849], 5.00th=[ 7373], 10.00th=[ 7570], 20.00th=[ 7898], 01:15:13.690 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 01:15:13.690 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9765], 01:15:13.690 | 99.00th=[10814], 99.50th=[11469], 99.90th=[13435], 99.95th=[14615], 01:15:13.690 | 99.99th=[15533] 01:15:13.690 bw ( KiB/s): min=31217, max=32144, per=99.87%, avg=31584.25, stdev=395.29, samples=4 01:15:13.690 iops : min= 7804, max= 8036, avg=7896.00, stdev=98.90, samples=4 01:15:13.690 write: IOPS=7878, BW=30.8MiB/s (32.3MB/s)(61.8MiB/2007msec); 0 zone resets 01:15:13.690 slat (nsec): min=1606, max=291011, avg=1972.28, stdev=2745.51 01:15:13.691 clat (usec): min=2918, max=14413, avg=7661.12, stdev=719.16 01:15:13.691 lat (usec): min=2934, max=14415, avg=7663.09, stdev=719.08 01:15:13.691 clat percentiles (usec): 01:15:13.691 | 1.00th=[ 6128], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 01:15:13.691 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7767], 01:15:13.691 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8848], 01:15:13.691 | 99.00th=[ 9634], 99.50th=[10159], 99.90th=[12518], 99.95th=[13435], 01:15:13.691 | 99.99th=[13698] 01:15:13.691 bw ( KiB/s): min=31104, max=32039, per=99.92%, avg=31489.75, stdev=439.33, samples=4 01:15:13.691 iops : min= 7776, max= 8009, avg=7872.25, stdev=109.52, samples=4 01:15:13.691 lat (msec) : 4=0.04%, 10=97.73%, 20=2.23% 01:15:13.691 cpu : usr=67.65%, sys=26.62%, ctx=11, majf=0, minf=7 01:15:13.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 01:15:13.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:15:13.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:15:13.691 issued rwts: total=15868,15812,0,0 short=0,0,0,0 dropped=0,0,0,0 01:15:13.691 latency : target=0, window=0, percentile=100.00%, depth=128 01:15:13.691 01:15:13.691 Run status group 0 (all jobs): 01:15:13.691 READ: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=62.0MiB (65.0MB), run=2007-2007msec 01:15:13.691 WRITE: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=61.8MiB (64.8MB), run=2007-2007msec 01:15:13.691 11:12:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:15:13.691 11:12:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 01:15:13.949 11:12:18 nvmf_tcp.nvmf_fio_host -- 
host/fio.sh@64 -- # ls_nested_guid=cd2ccd71-8b31-4672-bebd-e0a62f0c7d29 01:15:13.949 11:12:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb cd2ccd71-8b31-4672-bebd-e0a62f0c7d29 01:15:13.949 11:12:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=cd2ccd71-8b31-4672-bebd-e0a62f0c7d29 01:15:13.949 11:12:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 01:15:13.949 11:12:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 01:15:13.949 11:12:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 01:15:13.949 11:12:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:15:14.208 11:12:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 01:15:14.208 { 01:15:14.208 "uuid": "f7f769d4-d87f-4bbb-ba79-54706a1d9e35", 01:15:14.208 "name": "lvs_0", 01:15:14.208 "base_bdev": "Nvme0n1", 01:15:14.208 "total_data_clusters": 4, 01:15:14.208 "free_clusters": 0, 01:15:14.208 "block_size": 4096, 01:15:14.208 "cluster_size": 1073741824 01:15:14.208 }, 01:15:14.208 { 01:15:14.208 "uuid": "cd2ccd71-8b31-4672-bebd-e0a62f0c7d29", 01:15:14.208 "name": "lvs_n_0", 01:15:14.208 "base_bdev": "c2297597-4653-49f4-b7b9-97acfc497e3a", 01:15:14.208 "total_data_clusters": 1022, 01:15:14.208 "free_clusters": 1022, 01:15:14.208 "block_size": 4096, 01:15:14.208 "cluster_size": 4194304 01:15:14.208 } 01:15:14.208 ]' 01:15:14.208 11:12:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="cd2ccd71-8b31-4672-bebd-e0a62f0c7d29") .free_clusters' 01:15:14.208 11:12:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 01:15:14.208 11:12:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="cd2ccd71-8b31-4672-bebd-e0a62f0c7d29") .cluster_size' 01:15:14.208 4088 01:15:14.208 11:12:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 01:15:14.208 11:12:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 01:15:14.208 11:12:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 01:15:14.208 11:12:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 01:15:14.467 91dd71f5-1f89-41e7-928a-d5addb76830e 01:15:14.467 11:12:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 01:15:14.467 11:12:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 01:15:14.725 11:12:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host 
-- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:15:14.983 11:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:15:15.241 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:15:15.242 fio-3.35 01:15:15.242 Starting 1 thread 01:15:17.814 01:15:17.814 test: (groupid=0, jobs=1): err= 0: pid=90260: Mon Jul 22 11:12:22 2024 01:15:17.814 read: IOPS=6138, BW=24.0MiB/s (25.1MB/s)(48.1MiB/2008msec) 01:15:17.814 slat (nsec): min=1557, max=380403, avg=1907.65, stdev=4614.42 01:15:17.814 clat (usec): min=3144, max=25447, avg=10945.38, stdev=1299.69 01:15:17.814 lat (usec): min=3156, max=25452, avg=10947.28, stdev=1299.40 01:15:17.814 clat percentiles (usec): 01:15:17.814 | 1.00th=[ 8291], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10159], 01:15:17.814 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 01:15:17.814 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12125], 95.00th=[12387], 01:15:17.814 | 99.00th=[14484], 99.50th=[19006], 99.90th=[23725], 99.95th=[23987], 01:15:17.814 | 99.99th=[25297] 01:15:17.814 bw ( KiB/s): min=24127, max=24944, per=99.84%, avg=24515.75, stdev=424.31, samples=4 01:15:17.814 iops : min= 6031, max= 6236, avg=6128.75, stdev=106.31, samples=4 01:15:17.814 write: IOPS=6121, BW=23.9MiB/s (25.1MB/s)(48.0MiB/2008msec); 0 zone resets 
01:15:17.814 slat (nsec): min=1604, max=281119, avg=1933.79, stdev=2910.32 01:15:17.814 clat (usec): min=2863, max=23940, avg=9892.28, stdev=1449.16 01:15:17.814 lat (usec): min=2879, max=23941, avg=9894.22, stdev=1449.08 01:15:17.814 clat percentiles (usec): 01:15:17.814 | 1.00th=[ 7242], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 01:15:17.814 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 01:15:17.814 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11207], 01:15:17.814 | 99.00th=[16712], 99.50th=[20317], 99.90th=[22938], 99.95th=[23462], 01:15:17.814 | 99.99th=[23987] 01:15:17.814 bw ( KiB/s): min=23808, max=25117, per=99.82%, avg=24439.25, stdev=540.78, samples=4 01:15:17.814 iops : min= 5952, max= 6279, avg=6109.75, stdev=135.09, samples=4 01:15:17.814 lat (msec) : 4=0.08%, 10=37.31%, 20=62.12%, 50=0.50% 01:15:17.814 cpu : usr=68.71%, sys=26.96%, ctx=19, majf=0, minf=7 01:15:17.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 01:15:17.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:15:17.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:15:17.814 issued rwts: total=12326,12291,0,0 short=0,0,0,0 dropped=0,0,0,0 01:15:17.814 latency : target=0, window=0, percentile=100.00%, depth=128 01:15:17.814 01:15:17.814 Run status group 0 (all jobs): 01:15:17.814 READ: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s (25.1MB/s-25.1MB/s), io=48.1MiB (50.5MB), run=2008-2008msec 01:15:17.814 WRITE: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=48.0MiB (50.3MB), run=2008-2008msec 01:15:17.814 11:12:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 01:15:17.814 11:12:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 01:15:17.814 11:12:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 01:15:17.814 11:12:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 01:15:18.072 11:12:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 01:15:18.330 11:12:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 01:15:18.330 11:12:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 01:15:19.263 11:12:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:15:19.263 11:12:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 01:15:19.263 11:12:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 01:15:19.263 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 01:15:19.263 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 01:15:19.263 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:15:19.263 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 01:15:19.263 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 01:15:19.263 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:15:19.263 rmmod nvme_tcp 01:15:19.263 rmmod nvme_fabrics 01:15:19.263 rmmod nvme_keyring 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 89963 ']' 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 89963 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 89963 ']' 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 89963 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89963 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:15:19.522 killing process with pid 89963 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89963' 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 89963 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 89963 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:15:19.522 11:12:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:15:19.781 11:12:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:15:19.781 01:15:19.781 real 0m18.317s 01:15:19.781 user 1m16.884s 01:15:19.781 sys 0m5.309s 01:15:19.781 11:12:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 01:15:19.781 11:12:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:15:19.781 ************************************ 01:15:19.781 END TEST nvmf_fio_host 01:15:19.781 ************************************ 01:15:19.781 11:12:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:15:19.781 11:12:24 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 01:15:19.781 11:12:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:15:19.781 11:12:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:15:19.781 11:12:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:15:19.781 ************************************ 01:15:19.781 START TEST nvmf_failover 01:15:19.781 ************************************ 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh 
--transport=tcp 01:15:19.781 * Looking for test storage... 01:15:19.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:19.781 11:12:24 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 01:15:20.039 
11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:15:20.039 11:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:15:20.039 Cannot find device "nvmf_tgt_br" 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:15:20.039 Cannot find device "nvmf_tgt_br2" 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:15:20.039 Cannot find device "nvmf_tgt_br" 01:15:20.039 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:15:20.040 Cannot find device "nvmf_tgt_br2" 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
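The "Cannot find device" and "Cannot open network namespace" messages in this stretch of the trace are expected: nvmf_veth_init for the failover test first tears down whatever the previous test left behind, and the xtrace shows a bare true being executed after each failed delete, so the non-zero exit status is deliberately swallowed. A minimal sketch of that idiom, assuming simple "|| true" semantics (the exact construct used in the test scripts may differ):

    ip link set nvmf_tgt_br down               || true
    ip link delete nvmf_br type bridge         || true
    ip link delete nvmf_init_if                || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    # ...after which the namespace, veth pairs and bridge are re-created as above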
01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:15:20.040 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:15:20.040 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:15:20.040 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:15:20.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:15:20.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 01:15:20.298 01:15:20.298 --- 10.0.0.2 ping statistics --- 01:15:20.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:20.298 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:15:20.298 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:15:20.298 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 01:15:20.298 01:15:20.298 --- 10.0.0.3 ping statistics --- 01:15:20.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:20.298 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:15:20.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:15:20.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 01:15:20.298 01:15:20.298 --- 10.0.0.1 ping statistics --- 01:15:20.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:20.298 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=90498 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 90498 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 90498 ']' 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:20.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
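For reference, the nvmf_veth_init phase traced above builds an all-virtual topology: the initiator address 10.0.0.1 sits on nvmf_init_if in the root namespace, the two target addresses 10.0.0.2 and 10.0.0.3 sit on veth peers moved into the nvmf_tgt_ns_spdk namespace, and the root-namespace ends of all three veth pairs are enslaved to a single bridge, nvmf_br. The pings confirm connectivity in both directions before the target is started inside the namespace. A condensed sketch of the same setup, reconstructed from the trace (run as root; interface names and addresses are exactly those in the log, nothing else is assumed):

    # Target-side veth ends go into their own network namespace; the peer ends
    # stay in the root namespace and are attached to one bridge, nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator 10.0.0.1, targets 10.0.0.2 and 10.0.0.3 (all /24).
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring all links up and bridge the root-namespace ends together.
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP traffic (port 4420) in and bridge-local forwarding.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Connectivity check in both directions, as in the trace above.
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1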
01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:20.298 11:12:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:15:20.298 [2024-07-22 11:12:25.467299] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:20.298 [2024-07-22 11:12:25.467358] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:15:20.556 [2024-07-22 11:12:25.610255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:15:20.556 [2024-07-22 11:12:25.654103] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:15:20.556 [2024-07-22 11:12:25.655440] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:15:20.556 [2024-07-22 11:12:25.655458] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:15:20.556 [2024-07-22 11:12:25.655468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:15:20.556 [2024-07-22 11:12:25.655474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:15:20.556 [2024-07-22 11:12:25.655570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:15:20.556 [2024-07-22 11:12:25.656182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:15:20.556 [2024-07-22 11:12:25.656184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:15:20.556 [2024-07-22 11:12:25.696896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:15:21.123 11:12:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:21.123 11:12:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 01:15:21.123 11:12:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:15:21.123 11:12:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 01:15:21.123 11:12:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:15:21.381 11:12:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:15:21.381 11:12:26 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:15:21.381 [2024-07-22 11:12:26.529387] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:21.381 11:12:26 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:15:21.639 Malloc0 01:15:21.639 11:12:26 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:15:21.898 11:12:26 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:15:22.157 11:12:27 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:15:22.157 [2024-07-22 11:12:27.262109] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 01:15:22.157 11:12:27 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:15:22.415 [2024-07-22 11:12:27.453993] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:15:22.415 11:12:27 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 01:15:22.415 [2024-07-22 11:12:27.621858] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 01:15:22.674 11:12:27 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=90550 01:15:22.674 11:12:27 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 01:15:22.674 11:12:27 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:15:22.674 11:12:27 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 90550 /var/tmp/bdevperf.sock 01:15:22.674 11:12:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 90550 ']' 01:15:22.674 11:12:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:15:22.674 11:12:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:22.674 11:12:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:15:22.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
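To make the provisioning steps above easier to follow: the target side is configured entirely through scripts/rpc.py (a TCP transport, one 64 MiB malloc bdev with 512-byte blocks, and one subsystem with three TCP listeners on 10.0.0.2), while bdevperf is started with -z so it idles on its own RPC socket, /var/tmp/bdevperf.sock, until the test attaches paths and asks it to run the 15-second verify workload. A condensed sketch using the same arguments as the trace; the $rpc, $bdevperf and $nqn shorthands are introduced here only for brevity:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    nqn=nqn.2016-06.io.spdk:cnode1

    # Transport, backing bdev, subsystem, namespace -- arguments as in the trace.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc0

    # Three listeners on the same subsystem so the host has ports to fail over between.
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port"
    done

    # bdevperf waits (-z) on its RPC socket for perform_tests; the remaining
    # flags are copied verbatim from the trace above.
    $bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &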
01:15:22.674 11:12:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:22.674 11:12:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:15:23.609 11:12:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:23.609 11:12:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 01:15:23.609 11:12:28 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:15:23.609 NVMe0n1 01:15:23.609 11:12:28 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:15:23.867 01:15:23.867 11:12:29 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=90575 01:15:23.867 11:12:29 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:15:23.867 11:12:29 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 01:15:25.244 11:12:30 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:15:25.244 11:12:30 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 01:15:28.532 11:12:33 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:15:28.532 01:15:28.532 11:12:33 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:15:28.532 11:12:33 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 01:15:31.813 11:12:36 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:15:31.813 [2024-07-22 11:12:36.879114] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:15:31.813 11:12:36 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 01:15:32.749 11:12:37 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 01:15:33.007 11:12:38 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 90575 01:15:39.642 0 01:15:39.642 11:12:44 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 90550 01:15:39.642 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 90550 ']' 01:15:39.642 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 90550 01:15:39.642 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 01:15:39.642 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:39.642 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90550 01:15:39.642 killing process with pid 90550 01:15:39.642 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:15:39.642 
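The failover exercise itself is the RPC sequence traced above: two paths to cnode1 are attached through bdevperf's RPC socket, the 15-second verify job is started, and listeners are then removed and re-added so I/O is forced to move between ports 4420, 4421 and 4422 while the job runs; the test passes when perform_tests completes with status 0, as it does here. A condensed sketch of that sequence (arguments and sleeps as in the trace; $rpc and $nqn as above, plus the bdevperf.py path shown in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Two paths to the same subsystem: port 4420 plus 4421 as the failover target.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$nqn"

    # Kick off the verify workload, then remove and re-add listeners underneath it.
    $bdevperf_py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!

    sleep 1
    $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420    # drop the active port
    sleep 3
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$nqn"
    $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421    # force another failover
    sleep 3
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420       # restore the original port
    sleep 1
    $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422    # fail back to 4420
    wait "$run_test_pid"    # returns the workload's status; 0 in the run above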
11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:15:39.642 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90550' 01:15:39.642 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 90550 01:15:39.642 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 90550 01:15:39.642 11:12:44 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:15:39.642 [2024-07-22 11:12:27.687383] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:39.642 [2024-07-22 11:12:27.687466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90550 ] 01:15:39.642 [2024-07-22 11:12:27.828540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:39.642 [2024-07-22 11:12:27.869861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:15:39.642 [2024-07-22 11:12:27.911173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:15:39.642 Running I/O for 15 seconds... 01:15:39.642 [2024-07-22 11:12:30.217400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.642 [2024-07-22 11:12:30.217891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97904 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:15:39.642 [2024-07-22 11:12:30.217917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.642 [2024-07-22 11:12:30.217944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.642 [2024-07-22 11:12:30.217970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.217984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.642 [2024-07-22 11:12:30.217996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.218011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.642 [2024-07-22 11:12:30.218023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.218037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.642 [2024-07-22 11:12:30.218049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.218063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.642 [2024-07-22 11:12:30.218075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.642 [2024-07-22 11:12:30.218094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.643 [2024-07-22 11:12:30.218106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 
[2024-07-22 11:12:30.218183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.643 [2024-07-22 11:12:30.218560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.643 [2024-07-22 11:12:30.218586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.643 [2024-07-22 11:12:30.218613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.643 [2024-07-22 11:12:30.218639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.643 [2024-07-22 11:12:30.218665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.643 [2024-07-22 11:12:30.218691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.643 [2024-07-22 11:12:30.218720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.643 [2024-07-22 11:12:30.218747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.218981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.218993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.219007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.219019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.219033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.219047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.219061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.219073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.219087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.219099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.219117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.219130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.219143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.219157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.219171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.219183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.219196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.219209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.219222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.643 [2024-07-22 11:12:30.219235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.643 [2024-07-22 11:12:30.219248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.219261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 
11:12:30.219274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.219287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.219713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.219740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.219766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.219797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:16 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.219823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.219855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.219882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.219907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.219934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.219959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.219985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.219999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.220011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.220025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.220037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.220051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.220063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.220077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98808 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:15:39.644 [2024-07-22 11:12:30.220089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.220102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.220114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.220134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.220146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.220160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.220172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.220186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.220198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.220212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.220225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.220238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.220250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.220264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.644 [2024-07-22 11:12:30.220276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.220289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2015630 is same with the state(5) to be set 01:15:39.644 [2024-07-22 11:12:30.220304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.644 [2024-07-22 11:12:30.220314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.644 [2024-07-22 11:12:30.220324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98216 len:8 PRP1 0x0 PRP2 0x0 01:15:39.644 [2024-07-22 11:12:30.220335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.220348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.644 [2024-07-22 
11:12:30.220357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.644 [2024-07-22 11:12:30.220367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98816 len:8 PRP1 0x0 PRP2 0x0 01:15:39.644 [2024-07-22 11:12:30.220378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.644 [2024-07-22 11:12:30.220391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.220400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.220409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98824 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.220421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.220434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.220443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.220452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98832 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.220468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.220481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.220490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.220499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98840 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.220512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.220525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.220534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.220543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98848 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.220556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.220568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.220576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.220586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98856 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.220598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.220610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.220619] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.220628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98864 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.220640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.220652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.220661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.220670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98872 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.220682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.220694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.220703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.220712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98880 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.220724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.220737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.220746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.220756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98888 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.220768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.220781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.220790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.220803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98896 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.220815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.220828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.220837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.220855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98904 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.220869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.220881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.220890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.220899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98912 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.220910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.220923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.220932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.220941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98920 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.220953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.220965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.220975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.220984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98224 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.220996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.221008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.221017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.221027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98232 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.221039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.221051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.221060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.221069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98240 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.221081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.221093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.221102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.221113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98248 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.221125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.221138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.221153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 
11:12:30.221162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98256 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.221174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.221186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.221195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.221204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98264 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.221218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.221230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.221248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.221258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98272 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.221270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.221282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.645 [2024-07-22 11:12:30.221291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.645 [2024-07-22 11:12:30.221300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98280 len:8 PRP1 0x0 PRP2 0x0 01:15:39.645 [2024-07-22 11:12:30.221312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.221360] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2015630 was disconnected and freed. reset controller. 
01:15:39.645 [2024-07-22 11:12:30.221375] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 01:15:39.645 [2024-07-22 11:12:30.221422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:15:39.645 [2024-07-22 11:12:30.221437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.237424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:15:39.645 [2024-07-22 11:12:30.237465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.237485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:15:39.645 [2024-07-22 11:12:30.237502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.237520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:15:39.645 [2024-07-22 11:12:30.237536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.645 [2024-07-22 11:12:30.237553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:15:39.645 [2024-07-22 11:12:30.237625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff68e0 (9): Bad file descriptor 01:15:39.645 [2024-07-22 11:12:30.241297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:15:39.645 [2024-07-22 11:12:30.276653] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
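The block of NOTICE lines above is the driver printing every queued command it aborts while the submission queue of qpair 0x2015630 is torn down ("ABORTED - SQ DELETION"), after which the bdev layer frees the qpair, fails over from 10.0.0.2:4420 to 10.0.0.2:4421, and resets the controller successfully. As a quick sanity check that such a wall of output is only the expected abort storm and not real I/O errors, a small hypothetical helper like the sketch below could tally the aborted READs/WRITEs per teardown; it is not part of the test suite, and the regular expressions simply match the line format shown above.

#!/usr/bin/env python3
"""Hypothetical helper: summarize SPDK qpair abort storms from an autotest console log."""
import re
import sys
from collections import Counter

# Matches the nvme_io_qpair_print_command NOTICE lines shown above, e.g.
#   ... *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98160 len:8 ...
CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
                    r"sqid:(\d+) cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")
# Matches the line emitted when the disconnected qpair is freed, e.g.
#   ... qpair 0x2015630 was disconnected and freed. reset controller.
FREED_RE = re.compile(r"qpair (0x[0-9a-f]+) was disconnected and freed")

def summarize(lines):
    counts = Counter()   # opcode -> number of aborted commands since last teardown
    lbas = []            # LBAs seen since the last teardown report
    for line in lines:
        m = CMD_RE.search(line)
        if m:
            op, _sqid, lba, _length = m.groups()
            counts[op] += 1
            lbas.append(int(lba))
            continue
        f = FREED_RE.search(line)
        if f:
            lo, hi = (min(lbas), max(lbas)) if lbas else (0, 0)
            print(f"qpair {f.group(1)}: aborted {counts['READ']} READ / "
                  f"{counts['WRITE']} WRITE commands, lba range {lo}-{hi}")
            counts.clear()
            lbas.clear()

if __name__ == "__main__":
    summarize(sys.stdin)

Piping the console log through it (for example: cat console.log | python3 summarize_aborts.py, with whatever file name the log was saved under) would print one summary line per torn-down qpair, which makes it easier to confirm the abort counts are plausible for the workload rather than scanning thousands of NOTICE lines by hand.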
01:15:39.646 [2024-07-22 11:12:33.680211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.646 [2024-07-22 11:12:33.680270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.646 [2024-07-22 11:12:33.680308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.646 [2024-07-22 11:12:33.680334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.646 [2024-07-22 11:12:33.680361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.646 [2024-07-22 11:12:33.680387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.646 [2024-07-22 11:12:33.680413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.646 [2024-07-22 11:12:33.680438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.646 [2024-07-22 11:12:33.680464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680530] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680812] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.646 [2024-07-22 11:12:33.680956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.646 [2024-07-22 11:12:33.680968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.680982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.680994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45880 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:15:39.647 [2024-07-22 11:12:33.681369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681629] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.647 [2024-07-22 11:12:33.681732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681894] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.681977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.681991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.682003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.682016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.682028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.682042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.682054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.682068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.682080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.647 [2024-07-22 11:12:33.682094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.647 [2024-07-22 11:12:33.682106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.682397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.682423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:15:39.648 [2024-07-22 11:12:33.682436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.682448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.682475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.682500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.682526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.682552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.682578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.682603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682698] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.682975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.682988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.683001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.683013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.683027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.648 [2024-07-22 11:12:33.683039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.683052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.683065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.683078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.683090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.683104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.683116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.683129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.683142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.683155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.683168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.683181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.683193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.683206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.683220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.648 [2024-07-22 11:12:33.683234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:37 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.648 [2024-07-22 11:12:33.683246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:33.683271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:33.683302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:33.683328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:33.683357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:33.683383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:33.683409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:33.683435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:33.683461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.649 [2024-07-22 11:12:33.683487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45648 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:15:39.649 [2024-07-22 11:12:33.683513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.649 [2024-07-22 11:12:33.683539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.649 [2024-07-22 11:12:33.683564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.649 [2024-07-22 11:12:33.683590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.649 [2024-07-22 11:12:33.683616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.649 [2024-07-22 11:12:33.683698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.649 [2024-07-22 11:12:33.683750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.649 [2024-07-22 11:12:33.683760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45696 len:8 PRP1 0x0 PRP2 0x0 01:15:39.649 [2024-07-22 11:12:33.683773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683823] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20ab9f0 was disconnected and freed. reset controller. 
01:15:39.649 [2024-07-22 11:12:33.683838] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 01:15:39.649 [2024-07-22 11:12:33.683890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:15:39.649 [2024-07-22 11:12:33.683905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:15:39.649 [2024-07-22 11:12:33.683934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:15:39.649 [2024-07-22 11:12:33.683959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:15:39.649 [2024-07-22 11:12:33.683984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:33.683996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:15:39.649 [2024-07-22 11:12:33.686721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:15:39.649 [2024-07-22 11:12:33.686758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff68e0 (9): Bad file descriptor 01:15:39.649 [2024-07-22 11:12:33.718633] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
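At this point the log has shown the same pattern twice: an abort storm, the qpair freed, a failover to the next listener (10.0.0.2:4420 to 4421 earlier, 4421 to 4422 here), and then "Resetting controller successful." A companion sketch in the same hypothetical style, again just matching the strings visible in the log above, could pull only those failover milestones out of the noise to verify that each path rotation completed.

#!/usr/bin/env python3
"""Hypothetical helper: extract failover/reset milestones from the log above."""
import re
import sys

# Matches: ... Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
FAILOVER_RE = re.compile(r"Start failover from ([0-9.]+:\d+) to ([0-9.]+:\d+)")
# Matches: ... Resetting controller successful.
RESET_OK_RE = re.compile(r"Resetting controller successful")

def milestones(lines):
    pending = None  # (src, dst) of the failover we are waiting to see complete
    for line in lines:
        m = FAILOVER_RE.search(line)
        if m:
            pending = m.groups()
            continue
        if RESET_OK_RE.search(line) and pending:
            src, dst = pending
            print(f"failover {src} -> {dst}: reset completed")
            pending = None

if __name__ == "__main__":
    milestones(sys.stdin)

Run over this section it would report the 4420 -> 4421 and 4421 -> 4422 transitions, giving a compact view of the path rotation the test is exercising.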
01:15:39.649 [2024-07-22 11:12:38.076647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.076704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.076728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.076741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.076755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.076768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.076781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.076811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.076825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.076837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.076860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.076873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.076886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.076899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.076913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.076925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.076938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.076950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.076964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.076976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.076989] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.077001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.077015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.077027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.077041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.077053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.077066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.077078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.077092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.077104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.077117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.649 [2024-07-22 11:12:38.077129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.077143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.649 [2024-07-22 11:12:38.077161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.077177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.649 [2024-07-22 11:12:38.077190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.077204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.649 [2024-07-22 11:12:38.077217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.649 [2024-07-22 11:12:38.077230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077264] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.077381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.077407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.077432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.077458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.077483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.077514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88104 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.077541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.077566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:15:39.650 [2024-07-22 11:12:38.077801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.077986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.650 [2024-07-22 11:12:38.077999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.078012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.078025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.078039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.078051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.078065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.078077] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.078090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.078103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.078116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.078128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.078142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.078154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.078172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.078184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.078198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.078210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.650 [2024-07-22 11:12:38.078224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.650 [2024-07-22 11:12:38.078236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078338] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.078729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.078755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.078782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.078809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.078839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.078874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 
[2024-07-22 11:12:38.078887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.078900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.078926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.078952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.078978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.078992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.079004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.079017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.079030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.079043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.079056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.079069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.079081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.079095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.079108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.079121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.079133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.079147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.651 [2024-07-22 11:12:38.079160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.079178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.079190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.079204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.079216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.079230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.079246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.079259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.079272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.079285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.079297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.079311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.079328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.079342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.079354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.651 [2024-07-22 11:12:38.079368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.651 [2024-07-22 11:12:38.079380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.652 [2024-07-22 11:12:38.079406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:77 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.652 [2024-07-22 11:12:38.079432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.652 [2024-07-22 11:12:38.079458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:15:39.652 [2024-07-22 11:12:38.079484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87856 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:15:39.652 [2024-07-22 11:12:38.079895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab6b0 is same with the state(5) to be set 01:15:39.652 [2024-07-22 11:12:38.079923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.652 [2024-07-22 11:12:38.079932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.652 [2024-07-22 11:12:38.079942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87920 len:8 PRP1 0x0 PRP2 0x0 01:15:39.652 [2024-07-22 11:12:38.079954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.079967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.652 
[2024-07-22 11:12:38.079976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.652 [2024-07-22 11:12:38.079985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88440 len:8 PRP1 0x0 PRP2 0x0 01:15:39.652 [2024-07-22 11:12:38.079997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.080009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.652 [2024-07-22 11:12:38.080018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.652 [2024-07-22 11:12:38.080027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88448 len:8 PRP1 0x0 PRP2 0x0 01:15:39.652 [2024-07-22 11:12:38.080039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.080051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.652 [2024-07-22 11:12:38.080060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.652 [2024-07-22 11:12:38.080071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88456 len:8 PRP1 0x0 PRP2 0x0 01:15:39.652 [2024-07-22 11:12:38.080083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.080095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.652 [2024-07-22 11:12:38.080104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.652 [2024-07-22 11:12:38.080113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88464 len:8 PRP1 0x0 PRP2 0x0 01:15:39.652 [2024-07-22 11:12:38.080125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.080138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.652 [2024-07-22 11:12:38.080147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.652 [2024-07-22 11:12:38.080156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88472 len:8 PRP1 0x0 PRP2 0x0 01:15:39.652 [2024-07-22 11:12:38.080168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.080181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.652 [2024-07-22 11:12:38.080194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.652 [2024-07-22 11:12:38.080203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88480 len:8 PRP1 0x0 PRP2 0x0 01:15:39.652 [2024-07-22 11:12:38.080215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.080227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.652 [2024-07-22 11:12:38.080236] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.652 [2024-07-22 11:12:38.080246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88488 len:8 PRP1 0x0 PRP2 0x0 01:15:39.652 [2024-07-22 11:12:38.080258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.080270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:15:39.652 [2024-07-22 11:12:38.080278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:15:39.652 [2024-07-22 11:12:38.080288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88496 len:8 PRP1 0x0 PRP2 0x0 01:15:39.652 [2024-07-22 11:12:38.080300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.080347] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20ab6b0 was disconnected and freed. reset controller. 01:15:39.652 [2024-07-22 11:12:38.080363] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 01:15:39.652 [2024-07-22 11:12:38.080407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:15:39.652 [2024-07-22 11:12:38.080421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.080435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:15:39.652 [2024-07-22 11:12:38.080447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.080459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:15:39.652 [2024-07-22 11:12:38.080471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.080484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:15:39.652 [2024-07-22 11:12:38.080496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:39.652 [2024-07-22 11:12:38.080510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:15:39.652 [2024-07-22 11:12:38.083232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:15:39.652 [2024-07-22 11:12:38.083269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff68e0 (9): Bad file descriptor 01:15:39.652 [2024-07-22 11:12:38.116421] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
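The block above repeats the pattern of the previous failover, this time moving from 10.0.0.2:4422 to 10.0.0.2:4420: the old qpair is disconnected and freed, the queued I/O and the admin queue's ASYNC EVENT REQUESTs are aborted with SQ DELETION, the controller briefly sits in a failed state, and the reset on the new path ends with "Resetting controller successful". To pull just the path transitions out of a saved copy of this output (again, the file name is illustrative), a one-liner such as:

    grep -oE 'Start failover from [0-9.]+:[0-9]+ to [0-9.]+:[0-9]+' bdevperf.log

would list transitions like the 4421 -> 4422 and 4422 -> 4420 hops seen above.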
01:15:39.652
01:15:39.652 Latency(us)
01:15:39.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:15:39.652 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:15:39.652 Verification LBA range: start 0x0 length 0x4000
01:15:39.653 NVMe0n1 : 15.01 11840.91 46.25 281.79 0.00 10536.11 437.56 24951.06
01:15:39.653 ===================================================================================================================
01:15:39.653 Total : 11840.91 46.25 281.79 0.00 10536.11 437.56 24951.06
01:15:39.653 Received shutdown signal, test time was about 15.000000 seconds
01:15:39.653
01:15:39.653 Latency(us)
01:15:39.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:15:39.653 ===================================================================================================================
01:15:39.653 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:15:39.653 11:12:44 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
01:15:39.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
01:15:39.653 11:12:44 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
01:15:39.653 11:12:44 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
01:15:39.653 11:12:44 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=90751
01:15:39.653 11:12:44 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 90751 /var/tmp/bdevperf.sock
01:15:39.653 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 90751 ']'
01:15:39.653 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
01:15:39.653 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
01:15:39.653 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
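The failover.sh@65-@67 lines above are the pass/fail gate for the run that just finished: the script counts the "Resetting controller successful" notices and requires exactly three. A minimal sketch of that check, with the log path assumed (the script's actual grep target is not visible in this trace; the try.txt file it cats further down is used here as a stand-in):

    count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi

The summary table above is also internally consistent: at the 4096-byte I/O size, 11840.91 IOPS works out to 11840.91 x 4096 / 2^20, roughly 46.25 MiB/s, which matches the MiB/s column.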
01:15:39.653 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
01:15:39.653 11:12:44 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
01:15:39.653 11:12:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
01:15:40.219 11:12:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
01:15:40.219 11:12:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
01:15:40.219 11:12:45 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
01:15:40.477 [2024-07-22 11:12:45.556137] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
01:15:40.477 11:12:45 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
01:15:40.735 [2024-07-22 11:12:45.752101] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
01:15:40.735 11:12:45 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:15:40.993 NVMe0n1
01:15:40.993 11:12:46 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:15:41.251
01:15:41.251 11:12:46 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:15:41.508
01:15:41.508 11:12:46 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:15:41.508 11:12:46 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
01:15:41.766 11:12:46 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:15:41.766 11:12:46 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
01:15:45.052 11:12:49 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:15:45.052 11:12:49 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
01:15:45.052 11:12:50 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
01:15:45.052 11:12:50 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=90823
01:15:45.052 11:12:50 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 90823
01:15:46.427 0
01:15:46.427 11:12:51 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
01:15:46.427 [2024-07-22 11:12:44.532499] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization...
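The failover.sh@72-@94 trace above is the whole setup for the second pass: bdevperf is started against an RPC socket, two extra listeners are added on the target, the same subsystem is attached three times under the single controller name NVMe0 so bdev_nvme has three paths, the active path on port 4420 is detached to force a failover, and perform_tests drives the I/O. A condensed, hedged sketch of that sequence (addresses, ports and the NQN are taken verbatim from the trace; error handling and the surrounding test plumbing are omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # target side: listen on the two extra ports (failover.sh@76-@77)
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

    # initiator side: attach the same NQN on three ports under one controller name (failover.sh@78-@80)
    for port in 4420 4421 4422; do
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
    done

    # drop the active path and confirm the controller survives on another one (failover.sh@82-@88)
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    sleep 3
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0

The cat'ed contents of try.txt, which appear to be the bdevperf output for this pass, continue below.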
01:15:46.427 [2024-07-22 11:12:44.532642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90751 ] 01:15:46.427 [2024-07-22 11:12:44.672790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:46.427 [2024-07-22 11:12:44.749421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:15:46.427 [2024-07-22 11:12:44.824545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:15:46.427 [2024-07-22 11:12:46.899246] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 01:15:46.427 [2024-07-22 11:12:46.899408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:15:46.427 [2024-07-22 11:12:46.899431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:46.427 [2024-07-22 11:12:46.899449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:15:46.427 [2024-07-22 11:12:46.899462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:46.427 [2024-07-22 11:12:46.899477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:15:46.427 [2024-07-22 11:12:46.899491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:46.427 [2024-07-22 11:12:46.899505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:15:46.427 [2024-07-22 11:12:46.899518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:46.427 [2024-07-22 11:12:46.899532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:15:46.427 [2024-07-22 11:12:46.899586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:15:46.427 [2024-07-22 11:12:46.899615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca48e0 (9): Bad file descriptor 01:15:46.427 [2024-07-22 11:12:46.909398] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 01:15:46.427 Running I/O for 1 seconds... 
01:15:46.427
01:15:46.427 Latency(us)
01:15:46.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:15:46.427 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:15:46.427 Verification LBA range: start 0x0 length 0x4000
01:15:46.427 NVMe0n1 : 1.01 10755.20 42.01 0.00 0.00 11839.09 1105.43 12159.69
01:15:46.427 ===================================================================================================================
01:15:46.427 Total : 10755.20 42.01 0.00 0.00 11839.09 1105.43 12159.69
01:15:46.427 11:12:51 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:15:46.427 11:12:51 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
01:15:46.427 11:12:51 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:15:46.687 11:12:51 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:15:46.687 11:12:51 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
01:15:46.687 11:12:51 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:15:46.945 11:12:52 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
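The failover exercise above boils down to a short RPC sequence against the bdevperf instance; a minimal shell sketch of that sequence, using only the socket path, address, ports and NQN that appear in this log (a running nvmf target on 10.0.0.2:4420-4422 and a bdevperf listening on /var/tmp/bdevperf.sock are assumed):

  # Sketch of the path-failover steps driven by host/failover.sh (assumptions: target
  # already listening on 10.0.0.2 ports 4420-4422, bdevperf RPC socket at /var/tmp/bdevperf.sock).
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  # Attach the same subsystem through several portals, building a multipath NVMe0 bdev.
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN

  # Drop the active path; the controller must survive and I/O fails over to the remaining paths.
  $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  $RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0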
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:15:50.751 rmmod nvme_tcp 01:15:50.751 rmmod nvme_fabrics 01:15:50.751 rmmod nvme_keyring 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 90498 ']' 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 90498 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 90498 ']' 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 90498 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90498 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90498' 01:15:50.751 killing process with pid 90498 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 90498 01:15:50.751 11:12:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 90498 01:15:51.009 11:12:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:15:51.009 11:12:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:15:51.009 11:12:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:15:51.010 11:12:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:15:51.010 11:12:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 01:15:51.010 11:12:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:15:51.010 11:12:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:15:51.010 11:12:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:15:51.010 11:12:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:15:51.010 01:15:51.010 real 0m31.351s 01:15:51.010 user 1m58.923s 01:15:51.010 sys 0m6.516s 01:15:51.010 11:12:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 01:15:51.010 11:12:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:15:51.010 ************************************ 01:15:51.010 END TEST nvmf_failover 01:15:51.010 ************************************ 01:15:51.269 11:12:56 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 01:15:51.270 11:12:56 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 01:15:51.270 11:12:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:15:51.270 11:12:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:15:51.270 11:12:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:15:51.270 ************************************ 01:15:51.270 START TEST nvmf_host_discovery 01:15:51.270 ************************************ 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 01:15:51.270 * Looking for test storage... 01:15:51.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:15:51.270 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:15:51.529 Cannot find device "nvmf_tgt_br" 01:15:51.529 
11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:15:51.529 Cannot find device "nvmf_tgt_br2" 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:15:51.529 Cannot find device "nvmf_tgt_br" 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:15:51.529 Cannot find device "nvmf_tgt_br2" 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:15:51.529 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:15:51.529 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:15:51.529 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:15:51.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:15:51.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 01:15:51.801 01:15:51.801 --- 10.0.0.2 ping statistics --- 01:15:51.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:51.801 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:15:51.801 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:15:51.801 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 01:15:51.801 01:15:51.801 --- 10.0.0.3 ping statistics --- 01:15:51.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:51.801 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:15:51.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:15:51.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 01:15:51.801 01:15:51.801 --- 10.0.0.1 ping statistics --- 01:15:51.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:51.801 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=91093 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 91093 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 91093 ']' 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:51.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:51.801 11:12:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:51.801 [2024-07-22 11:12:56.958513] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:51.801 [2024-07-22 11:12:56.958591] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:15:52.060 [2024-07-22 11:12:57.102450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:52.060 [2024-07-22 11:12:57.145092] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
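The ip/iptables commands above are the veth-and-bridge topology that nvmf_veth_init builds for these host tests; condensed into a standalone sketch (namespace, interface names and the 10.0.0.0/24 addressing exactly as in the log, root privileges assumed, the second target interface pair omitted for brevity):

  # Build the initiator <-> target veth/bridge topology used by the nvmf host tests.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge the two pairs together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator namespace can now reach the target address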
01:15:52.060 [2024-07-22 11:12:57.145141] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:15:52.060 [2024-07-22 11:12:57.145151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:15:52.060 [2024-07-22 11:12:57.145159] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:15:52.060 [2024-07-22 11:12:57.145165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:15:52.060 [2024-07-22 11:12:57.145195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:15:52.060 [2024-07-22 11:12:57.186228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:15:52.628 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:52.628 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 01:15:52.628 11:12:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:15:52.628 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 01:15:52.628 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:52.886 [2024-07-22 11:12:57.886434] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:52.886 [2024-07-22 11:12:57.898533] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:52.886 null0 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:52.886 null1 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:52.886 11:12:57 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 01:15:52.887 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:52.887 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:52.887 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:52.887 11:12:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=91125 01:15:52.887 11:12:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 01:15:52.887 11:12:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 91125 /tmp/host.sock 01:15:52.887 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 91125 ']' 01:15:52.887 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 01:15:52.887 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:52.887 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:15:52.887 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:15:52.887 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:52.887 11:12:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:52.887 [2024-07-22 11:12:57.990977] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:52.887 [2024-07-22 11:12:57.991050] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91125 ] 01:15:53.145 [2024-07-22 11:12:58.133984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:53.145 [2024-07-22 11:12:58.177644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:15:53.145 [2024-07-22 11:12:58.219246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:53.711 11:12:58 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:53.711 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.969 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 01:15:53.969 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 01:15:53.969 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.969 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:53.969 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.969 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 01:15:53.969 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:15:53.970 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:15:53.970 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.970 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:53.970 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:15:53.970 11:12:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:15:53.970 11:12:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.970 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:53.970 [2024-07-22 11:12:59.176771] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 01:15:54.228 
11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 01:15:54.228 11:12:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 01:15:54.795 [2024-07-22 11:12:59.836014] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:15:54.795 [2024-07-22 11:12:59.836063] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:15:54.795 [2024-07-22 11:12:59.836079] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:15:54.795 [2024-07-22 11:12:59.842050] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 01:15:54.795 [2024-07-22 11:12:59.898837] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 01:15:54.795 [2024-07-22 11:12:59.898879] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:55.362 11:13:00 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 01:15:55.362 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.363 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.621 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.622 [2024-07-22 11:13:00.713093] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:15:55.622 [2024-07-22 11:13:00.714212] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:15:55.622 [2024-07-22 11:13:00.714239] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:15:55.622 [2024-07-22 11:13:00.720199] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.622 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:15:55.622 [2024-07-22 11:13:00.778341] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:15:55.622 [2024-07-22 11:13:00.778364] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:15:55.622 [2024-07-22 11:13:00.778371] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.879 [2024-07-22 11:13:00.945499] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:15:55.879 [2024-07-22 11:13:00.945642] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:15:55.879 [2024-07-22 11:13:00.951489] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 01:15:55.879 [2024-07-22 11:13:00.951511] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:15:55.879 [2024-07-22 11:13:00.951608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:15:55.879 [2024-07-22 11:13:00.951640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:55.879 [2024-07-22 11:13:00.951652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:15:55.879 [2024-07-22 11:13:00.951661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:55.879 [2024-07-22 11:13:00.951670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:15:55.879 [2024-07-22 11:13:00.951679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:55.879 [2024-07-22 11:13:00.951688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:15:55.879 [2024-07-22 11:13:00.951697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:15:55.879 [2024-07-22 11:13:00.951706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1336470 is same with the state(5) to be set 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.879 11:13:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:55.879 11:13:01 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:15:55.879 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:15:56.137 11:13:01 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:15:56.137 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:56.138 11:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:57.511 [2024-07-22 11:13:02.332407] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:15:57.511 [2024-07-22 11:13:02.332443] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:15:57.511 [2024-07-22 11:13:02.332456] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:15:57.511 [2024-07-22 11:13:02.338422] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 01:15:57.511 [2024-07-22 11:13:02.398432] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:15:57.511 [2024-07-22 11:13:02.398480] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 01:15:57.511 request: 01:15:57.511 { 01:15:57.511 "name": "nvme", 01:15:57.511 "trtype": "tcp", 01:15:57.511 "traddr": "10.0.0.2", 01:15:57.511 "adrfam": "ipv4", 01:15:57.511 "trsvcid": "8009", 01:15:57.511 "hostnqn": "nqn.2021-12.io.spdk:test", 01:15:57.511 "wait_for_attach": true, 01:15:57.511 "method": "bdev_nvme_start_discovery", 01:15:57.511 "req_id": 1 01:15:57.511 } 01:15:57.511 Got JSON-RPC error response 01:15:57.511 response: 01:15:57.511 { 01:15:57.511 "code": -17, 01:15:57.511 "message": "File exists" 01:15:57.511 } 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:57.511 request: 01:15:57.511 { 01:15:57.511 "name": "nvme_second", 01:15:57.511 "trtype": "tcp", 01:15:57.511 "traddr": "10.0.0.2", 01:15:57.511 "adrfam": "ipv4", 01:15:57.511 "trsvcid": "8009", 01:15:57.511 "hostnqn": "nqn.2021-12.io.spdk:test", 01:15:57.511 "wait_for_attach": true, 01:15:57.511 "method": "bdev_nvme_start_discovery", 01:15:57.511 "req_id": 1 01:15:57.511 } 01:15:57.511 Got JSON-RPC error response 01:15:57.511 response: 01:15:57.511 { 01:15:57.511 "code": -17, 01:15:57.511 "message": "File exists" 01:15:57.511 } 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:15:57.511 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:57.512 11:13:02 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:57.512 11:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:15:58.892 [2024-07-22 11:13:03.689527] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:15:58.892 [2024-07-22 11:13:03.689581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13351a0 with addr=10.0.0.2, port=8010 01:15:58.892 [2024-07-22 11:13:03.689603] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:15:58.892 [2024-07-22 11:13:03.689615] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:15:58.892 [2024-07-22 11:13:03.689624] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 01:15:59.828 [2024-07-22 11:13:04.687903] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:15:59.828 [2024-07-22 11:13:04.687952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1353e80 with addr=10.0.0.2, port=8010 01:15:59.828 [2024-07-22 11:13:04.687974] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:15:59.828 [2024-07-22 11:13:04.687983] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:15:59.828 [2024-07-22 11:13:04.687992] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 01:16:00.763 [2024-07-22 11:13:05.686166] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 01:16:00.763 request: 01:16:00.763 { 01:16:00.763 "name": "nvme_second", 01:16:00.763 "trtype": "tcp", 01:16:00.763 "traddr": "10.0.0.2", 01:16:00.763 "adrfam": "ipv4", 01:16:00.763 "trsvcid": "8010", 01:16:00.763 "hostnqn": "nqn.2021-12.io.spdk:test", 01:16:00.763 "wait_for_attach": false, 01:16:00.763 "attach_timeout_ms": 3000, 01:16:00.763 "method": "bdev_nvme_start_discovery", 01:16:00.763 "req_id": 1 01:16:00.763 } 01:16:00.763 Got JSON-RPC error response 01:16:00.763 response: 01:16:00.763 { 01:16:00.763 "code": -110, 
01:16:00.763 "message": "Connection timed out" 01:16:00.763 } 01:16:00.763 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:16:00.763 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 01:16:00.763 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:16:00.763 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:16:00.763 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 91125 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:16:00.764 rmmod nvme_tcp 01:16:00.764 rmmod nvme_fabrics 01:16:00.764 rmmod nvme_keyring 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 91093 ']' 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 91093 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 91093 ']' 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 91093 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91093 01:16:00.764 killing process with pid 91093 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91093' 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 91093 01:16:00.764 11:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 91093 01:16:01.022 11:13:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:16:01.022 11:13:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:16:01.022 11:13:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:16:01.022 11:13:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:16:01.022 11:13:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 01:16:01.022 11:13:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:16:01.022 11:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:16:01.023 11:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:16:01.023 11:13:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:16:01.023 01:16:01.023 real 0m9.897s 01:16:01.023 user 0m18.146s 01:16:01.023 sys 0m2.565s 01:16:01.023 11:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 01:16:01.023 ************************************ 01:16:01.023 END TEST nvmf_host_discovery 01:16:01.023 ************************************ 01:16:01.023 11:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:16:01.023 11:13:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:16:01.023 11:13:06 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:16:01.023 11:13:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:16:01.023 11:13:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:16:01.023 11:13:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:16:01.282 ************************************ 01:16:01.282 START TEST nvmf_host_multipath_status 01:16:01.282 ************************************ 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:16:01.282 * Looking for test storage... 
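A brief aside on the discovery RPCs exercised in the nvmf_host_discovery run that ends above: rpc_cmd forwards its arguments to the SPDK JSON-RPC socket at /tmp/host.sock, so the same calls can be reproduced by hand with scripts/rpc.py. The sketch below is illustrative only and simply mirrors the parameters already visible in the trace (bdev prefix, 10.0.0.2 ports 8009/8010, hostnqn nqn.2021-12.io.spdk:test); nothing here is new test logic.

  # Sketch only: parameters taken verbatim from the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/tmp/host.sock
  # Start discovery and wait for the initial attach (-w == wait_for_attach in the JSON request).
  $rpc -s $sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -w
  # As in the trace, a second start against the same 10.0.0.2:8009 endpoint is rejected
  # with JSON-RPC error -17 "File exists", and a start against the unused port 8010 with
  # "-T 3000" (attach_timeout_ms) fails with -110 "Connection timed out".
  $rpc -s $sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -w || echo "already running (expected)"
  # Tear the discovery service down again, as the test does before nvmftestfini.
  $rpc -s $sock bdev_nvme_stop_discovery -b nvme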
01:16:01.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:16:01.282 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:16:01.283 Cannot find device "nvmf_tgt_br" 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 01:16:01.283 Cannot find device "nvmf_tgt_br2" 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:16:01.283 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:16:01.542 Cannot find device "nvmf_tgt_br" 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:16:01.542 Cannot find device "nvmf_tgt_br2" 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:16:01.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:16:01.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:16:01.542 11:13:06 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:16:01.542 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:16:01.801 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:16:01.801 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:16:01.801 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:16:01.801 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:16:01.801 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:16:01.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:16:01.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 01:16:01.802 01:16:01.802 --- 10.0.0.2 ping statistics --- 01:16:01.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:01.802 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:16:01.802 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:16:01.802 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 01:16:01.802 01:16:01.802 --- 10.0.0.3 ping statistics --- 01:16:01.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:01.802 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:16:01.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:16:01.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 01:16:01.802 01:16:01.802 --- 10.0.0.1 ping statistics --- 01:16:01.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:01.802 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=91575 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 91575 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 91575 ']' 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 01:16:01.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 01:16:01.802 11:13:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:16:01.802 [2024-07-22 11:13:06.934089] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:16:01.802 [2024-07-22 11:13:06.934343] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:16:02.061 [2024-07-22 11:13:07.077654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:16:02.061 [2024-07-22 11:13:07.119558] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:16:02.061 [2024-07-22 11:13:07.119810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:16:02.061 [2024-07-22 11:13:07.119925] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:16:02.061 [2024-07-22 11:13:07.119972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:16:02.061 [2024-07-22 11:13:07.119999] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:16:02.061 [2024-07-22 11:13:07.120180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:16:02.061 [2024-07-22 11:13:07.120181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:16:02.061 [2024-07-22 11:13:07.161865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:16:02.646 11:13:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:16:02.646 11:13:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 01:16:02.647 11:13:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:16:02.647 11:13:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 01:16:02.647 11:13:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:16:02.647 11:13:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:16:02.647 11:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=91575 01:16:02.647 11:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:16:02.905 [2024-07-22 11:13:08.018781] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:16:02.905 11:13:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:16:03.163 Malloc0 01:16:03.163 11:13:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:16:03.421 11:13:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:16:03.678 11:13:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:16:03.678 [2024-07-22 11:13:08.817713] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:16:03.678 11:13:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:16:03.937 [2024-07-22 11:13:09.026092] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:16:03.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:16:03.937 11:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=91625 01:16:03.937 11:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 01:16:03.937 11:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:16:03.937 11:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 91625 /var/tmp/bdevperf.sock 01:16:03.937 11:13:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 91625 ']' 01:16:03.937 11:13:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:16:03.937 11:13:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 01:16:03.937 11:13:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:16:03.937 11:13:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 01:16:03.937 11:13:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:16:04.871 11:13:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:16:04.871 11:13:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 01:16:04.871 11:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:16:05.129 11:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 01:16:05.387 Nvme0n1 01:16:05.387 11:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:16:05.646 Nvme0n1 01:16:05.646 11:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 01:16:05.646 11:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 01:16:07.549 11:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 01:16:07.549 11:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 01:16:07.808 11:13:12 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:16:08.066 11:13:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 01:16:09.000 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 01:16:09.000 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:16:09.000 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:16:09.000 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:09.259 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:09.259 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:16:09.259 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:09.259 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:16:09.517 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:16:09.517 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:16:09.517 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:09.517 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:16:09.517 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:09.517 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:16:09.517 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:09.517 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:16:09.775 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:09.775 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:16:09.775 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:09.775 11:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:16:10.033 11:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:10.033 11:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 01:16:10.033 11:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:10.033 11:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:16:10.291 11:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:10.291 11:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 01:16:10.291 11:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:16:10.291 11:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:16:10.552 11:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 01:16:11.488 11:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 01:16:11.488 11:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:16:11.488 11:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:11.488 11:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:16:11.746 11:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:16:11.746 11:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:16:11.746 11:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:11.746 11:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:16:12.004 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:12.004 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:16:12.004 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:12.004 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:16:12.263 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:12.263 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:16:12.263 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:16:12.263 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:12.520 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:12.520 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:16:12.520 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:16:12.520 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:12.520 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:12.520 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:16:12.520 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:12.520 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:16:12.778 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:12.778 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 01:16:12.778 11:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:16:13.036 11:13:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 01:16:13.294 11:13:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 01:16:14.268 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 01:16:14.268 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:16:14.268 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:14.268 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:16:14.526 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:14.526 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:16:14.526 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:14.526 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:16:14.526 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:16:14.526 
11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:16:14.526 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:14.526 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:16:14.785 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:14.785 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:16:14.785 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:14.785 11:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:16:15.043 11:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:15.043 11:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:16:15.043 11:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:16:15.043 11:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:15.301 11:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:15.301 11:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:16:15.301 11:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:15.301 11:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:16:15.301 11:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:15.301 11:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 01:16:15.301 11:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:16:15.559 11:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:16:15.816 11:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 01:16:16.751 11:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 01:16:16.751 11:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:16:16.751 11:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:16.751 11:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:16:17.009 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:17.009 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:16:17.009 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:16:17.009 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:17.268 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:16:17.268 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:16:17.268 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:16:17.268 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:17.268 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:17.268 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:16:17.268 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:17.268 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:16:17.526 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:17.526 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:16:17.526 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:17.526 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:16:17.784 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:17.784 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:16:17.784 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:16:17.784 11:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:18.041 11:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:16:18.041 11:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 01:16:18.041 11:13:23 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:16:18.041 11:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:16:18.299 11:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 01:16:19.232 11:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 01:16:19.232 11:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:16:19.232 11:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:19.232 11:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:16:19.489 11:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:16:19.489 11:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:16:19.489 11:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:19.489 11:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:16:19.747 11:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:16:19.747 11:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:16:19.747 11:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:19.747 11:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:16:20.004 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:20.004 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:16:20.004 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:20.004 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:16:20.261 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:20.261 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 01:16:20.261 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:20.261 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:16:20.261 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:16:20.261 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:16:20.261 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:20.261 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:16:20.518 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:16:20.518 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 01:16:20.518 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:16:20.775 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:16:21.032 11:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 01:16:22.019 11:13:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 01:16:22.019 11:13:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:16:22.019 11:13:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:22.019 11:13:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:16:22.019 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:16:22.019 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:16:22.019 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:22.019 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:16:22.276 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:22.276 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:16:22.276 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:22.276 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:16:22.534 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:22.534 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 01:16:22.534 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:16:22.534 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:22.795 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:22.795 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 01:16:22.795 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:22.795 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:16:22.795 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:16:22.795 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:16:22.795 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:22.795 11:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:16:23.054 11:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:23.054 11:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 01:16:23.312 11:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 01:16:23.312 11:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 01:16:23.571 11:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:16:23.571 11:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 01:16:24.953 11:13:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 01:16:24.953 11:13:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:16:24.953 11:13:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:24.953 11:13:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:16:24.953 11:13:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:24.953 11:13:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current true 01:16:24.953 11:13:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:24.953 11:13:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:16:24.953 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:24.953 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:16:24.953 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:24.953 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:16:25.234 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:25.234 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:16:25.234 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:25.234 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:16:25.492 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:25.492 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:16:25.492 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:25.492 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:16:25.750 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:25.750 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:16:25.750 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:16:25.750 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:25.750 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:25.750 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 01:16:25.750 11:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:16:26.008 11:13:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 
01:16:26.266 11:13:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 01:16:27.201 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 01:16:27.201 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:16:27.201 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:27.201 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:16:27.557 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:16:27.557 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:16:27.557 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:27.557 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:16:27.557 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:27.558 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:16:27.558 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:27.558 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:16:27.817 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:27.817 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:16:27.817 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:27.817 11:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:16:28.074 11:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:28.074 11:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:16:28.074 11:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:28.074 11:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:16:28.332 11:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:28.333 11:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:16:28.333 11:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 01:16:28.333 11:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:28.333 11:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:28.333 11:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 01:16:28.333 11:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:16:28.590 11:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 01:16:28.848 11:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 01:16:29.783 11:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 01:16:29.783 11:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:16:29.783 11:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:29.783 11:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:16:30.040 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:30.040 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:16:30.040 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:30.040 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:16:30.299 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:30.299 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:16:30.299 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:30.299 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:16:30.299 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:30.299 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:16:30.299 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:30.299 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:16:30.557 11:13:35 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:30.557 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:16:30.557 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:30.557 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:16:30.815 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:30.815 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:16:30.815 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:30.816 11:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:16:31.074 11:13:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:31.074 11:13:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 01:16:31.074 11:13:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:16:31.074 11:13:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:16:31.331 11:13:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 01:16:32.266 11:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 01:16:32.266 11:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:16:32.266 11:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:32.266 11:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:16:32.532 11:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:32.532 11:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:16:32.532 11:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:32.532 11:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:16:32.789 11:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:16:32.789 11:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:16:32.789 11:13:37 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:16:32.789 11:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:33.046 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:33.046 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:16:33.046 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:33.046 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:16:33.046 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:33.046 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:16:33.046 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:16:33.046 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:33.304 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:16:33.304 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:16:33.304 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:16:33.304 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:16:33.561 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:16:33.561 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 91625 01:16:33.561 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 91625 ']' 01:16:33.561 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 91625 01:16:33.561 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 01:16:33.561 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:16:33.561 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91625 01:16:33.561 killing process with pid 91625 01:16:33.561 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:16:33.561 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:16:33.561 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91625' 01:16:33.561 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 91625 01:16:33.561 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@972 -- # wait 91625 01:16:33.822 Connection closed with partial response: 01:16:33.822 01:16:33.822 01:16:33.822 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 91625 01:16:33.822 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:16:33.822 [2024-07-22 11:13:09.095121] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:16:33.822 [2024-07-22 11:13:09.095203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91625 ] 01:16:33.822 [2024-07-22 11:13:09.233073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:33.822 [2024-07-22 11:13:09.302445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:16:33.822 [2024-07-22 11:13:09.374778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:16:33.822 Running I/O for 90 seconds... 01:16:33.822 [2024-07-22 11:13:23.224227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 [2024-07-22 11:13:23.224315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 [2024-07-22 11:13:23.224386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 [2024-07-22 11:13:23.224421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 [2024-07-22 11:13:23.224454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 [2024-07-22 11:13:23.224488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 [2024-07-22 11:13:23.224521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 
[2024-07-22 11:13:23.224555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 [2024-07-22 11:13:23.224589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 [2024-07-22 11:13:23.224623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 [2024-07-22 11:13:23.224657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 [2024-07-22 11:13:23.224715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 [2024-07-22 11:13:23.224751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 [2024-07-22 11:13:23.224786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 [2024-07-22 11:13:23.224828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.822 [2024-07-22 11:13:23.224880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.822 [2024-07-22 11:13:23.224919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8640 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.822 [2024-07-22 11:13:23.224953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.224974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.822 [2024-07-22 11:13:23.224987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.225007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.822 [2024-07-22 11:13:23.225021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.225040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.822 [2024-07-22 11:13:23.225053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.225073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.822 [2024-07-22 11:13:23.225086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.225106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.822 [2024-07-22 11:13:23.225119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.225138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.822 [2024-07-22 11:13:23.225163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:16:33.822 [2024-07-22 11:13:23.225183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.225197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.225239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.225276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:35 nsid:1 lba:9280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.225311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.225354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.225389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.225424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.225459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.225495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.225530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.225567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.225602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.225643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225663] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.225676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.225712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.225746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.225779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.225813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.225856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.225899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.225937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.225974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.225995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.226009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
01:16:33.823 [2024-07-22 11:13:23.226032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.226045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.226086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.226120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.226154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.226189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.226224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.226258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.226291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.226325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.823 [2024-07-22 11:13:23.226360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.226398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.226432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.226466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.226505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.226541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.226575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.226608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.226642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:16:33.823 [2024-07-22 11:13:23.226662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.823 [2024-07-22 11:13:23.226677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.226697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.226710] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.226731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.226745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.226767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.226781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.226800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.226814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.226842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.226871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.226893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.226907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.226929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.226951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.226973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.226998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227100] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:16:33.824 [2024-07-22 11:13:23.227450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.227558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.227592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.227627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.227661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.227695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.227728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.227762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9512 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.824 [2024-07-22 11:13:23.227798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.227976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.227990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.228010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.228024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.228044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.228058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.228079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.228094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.228113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.228129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.228149] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.228164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:16:33.824 [2024-07-22 11:13:23.228184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.824 [2024-07-22 11:13:23.228198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.228218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.825 [2024-07-22 11:13:23.228232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.228252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.825 [2024-07-22 11:13:23.228271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.228291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.825 [2024-07-22 11:13:23.228305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.228325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.825 [2024-07-22 11:13:23.228339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.825 [2024-07-22 11:13:23.229084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229264] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
01:16:33.825 [2024-07-22 11:13:23.229789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:23.229896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:23.229911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.417473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:36.417562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.417613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:36.417628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.417647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:36.417660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.417678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:27600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.825 [2024-07-22 11:13:36.417691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.417709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:27632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.825 [2024-07-22 11:13:36.417721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.417765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:27672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.825 [2024-07-22 11:13:36.417778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.417796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.825 [2024-07-22 11:13:36.417809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.418026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:28048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:36.418044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.418064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:36.418077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.418096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:28080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:36.418109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.418128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:28096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:36.418141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.418159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:28112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:36.418171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.418189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:28128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:36.418202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.418220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:36.418232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.418250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.825 [2024-07-22 11:13:36.418262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.418280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:27728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.825 [2024-07-22 11:13:36.418292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.418309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.825 [2024-07-22 11:13:36.418322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.418344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.825 [2024-07-22 11:13:36.418366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.418408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:28160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:36.418423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.418442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:28176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:36.418455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:16:33.825 [2024-07-22 11:13:36.418474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:28192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.825 [2024-07-22 11:13:36.418487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.418505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.418519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.418537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:28224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.418551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.418570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:28240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.418583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.418602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:28256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.418614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.418632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.418646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.418664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:16:33.826 [2024-07-22 11:13:36.418676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.418694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.418707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.418725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:28320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.418738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.418757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.418781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.418801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.826 [2024-07-22 11:13:36.418814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.418833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.826 [2024-07-22 11:13:36.418859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.418878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.826 [2024-07-22 11:13:36.418891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.419768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.826 [2024-07-22 11:13:36.419796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.419822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.826 [2024-07-22 11:13:36.419835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.419868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.826 [2024-07-22 11:13:36.419882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.419903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:27816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.826 [2024-07-22 11:13:36.419917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.419935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.419948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.419966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:28376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.419979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.419997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:28392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.420012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.420030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.420043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.420061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:28424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.420074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.420100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:28440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.420113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.420131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.826 [2024-07-22 11:13:36.420144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.420163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:27920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:16:33.826 [2024-07-22 11:13:36.420177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.420196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.420209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.420241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.420254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.420273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:28488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.420286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.420304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:28504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.420318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.420336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.420349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:16:33.826 [2024-07-22 11:13:36.420368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:28536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:16:33.826 [2024-07-22 11:13:36.420381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:16:33.826 Received shutdown signal, test time was about 27.935087 seconds 01:16:33.826 01:16:33.826 Latency(us) 01:16:33.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:16:33.826 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:16:33.826 Verification LBA range: start 0x0 length 0x4000 01:16:33.826 Nvme0n1 : 27.93 9306.97 36.36 0.00 0.00 13729.67 98.70 3018551.31 01:16:33.826 =================================================================================================================== 01:16:33.826 Total : 9306.97 36.36 0.00 0.00 13729.67 98.70 3018551.31 01:16:33.826 11:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:16:34.084 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 01:16:34.084 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:16:34.084 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 01:16:34.084 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 01:16:34.084 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 01:16:34.342 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:16:34.342 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 01:16:34.342 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 01:16:34.342 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:16:34.342 rmmod 
nvme_tcp 01:16:34.342 rmmod nvme_fabrics 01:16:34.342 rmmod nvme_keyring 01:16:34.342 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:16:34.342 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 01:16:34.342 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 01:16:34.342 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 91575 ']' 01:16:34.342 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 91575 01:16:34.342 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 91575 ']' 01:16:34.342 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 91575 01:16:34.342 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91575 01:16:34.600 killing process with pid 91575 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91575' 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 91575 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 91575 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:16:34.600 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:16:34.859 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:16:34.859 ************************************ 01:16:34.859 END TEST nvmf_host_multipath_status 01:16:34.859 ************************************ 01:16:34.859 01:16:34.859 real 0m33.611s 01:16:34.859 user 1m42.453s 01:16:34.859 sys 0m12.690s 01:16:34.859 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 01:16:34.859 11:13:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:16:34.859 11:13:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:16:34.859 11:13:39 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:16:34.859 11:13:39 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:16:34.859 11:13:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:16:34.859 11:13:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:16:34.859 ************************************ 01:16:34.859 START TEST nvmf_discovery_remove_ifc 01:16:34.859 ************************************ 01:16:34.859 11:13:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:16:34.859 * Looking for test storage... 01:16:34.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:16:34.859 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:16:34.859 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 01:16:34.859 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:16:34.859 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:16:34.859 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:16:34.859 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:16:34.859 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:16:34.859 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:16:34.859 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:16:34.859 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:16:34.859 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:16:34.859 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:16:35.117 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:16:35.118 11:13:40 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:16:35.118 Cannot find device "nvmf_tgt_br" 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:16:35.118 Cannot find device "nvmf_tgt_br2" 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:16:35.118 Cannot find device "nvmf_tgt_br" 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:16:35.118 Cannot find device "nvmf_tgt_br2" 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:16:35.118 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:16:35.118 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:16:35.118 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 
01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:16:35.376 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:16:35.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:16:35.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 01:16:35.635 01:16:35.635 --- 10.0.0.2 ping statistics --- 01:16:35.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:35.635 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:16:35.635 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:16:35.635 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 01:16:35.635 01:16:35.635 --- 10.0.0.3 ping statistics --- 01:16:35.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:35.635 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:16:35.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:16:35.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 01:16:35.635 01:16:35.635 --- 10.0.0.1 ping statistics --- 01:16:35.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:35.635 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=92354 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 92354 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 92354 ']' 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 01:16:35.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 01:16:35.635 11:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:35.635 [2024-07-22 11:13:40.709503] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
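For orientation, the nvmf_veth_init block traced above reduces to the topology sketch below. It is condensed from the ip/iptables commands visible in the trace (interface names, namespace name, and addresses are exactly those set by nvmf/common.sh), not a verbatim copy of that script, and it needs root:

    # Minimal sketch of the veth/bridge topology the test builds (run as root).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # ties both sides together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                    # host -> target, as checked above
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target -> host

Once all three pings succeed, the target (nvmf_tgt in the namespace) is reachable at 10.0.0.2 and the trace proceeds to start the application.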
01:16:35.635 [2024-07-22 11:13:40.709564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:16:35.894 [2024-07-22 11:13:40.853489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:35.894 [2024-07-22 11:13:40.895215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:16:35.894 [2024-07-22 11:13:40.895258] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:16:35.894 [2024-07-22 11:13:40.895268] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:16:35.894 [2024-07-22 11:13:40.895276] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:16:35.894 [2024-07-22 11:13:40.895283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:16:35.894 [2024-07-22 11:13:40.895308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:16:35.894 [2024-07-22 11:13:40.936046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:36.459 [2024-07-22 11:13:41.603881] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:16:36.459 [2024-07-22 11:13:41.611977] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 01:16:36.459 null0 01:16:36.459 [2024-07-22 11:13:41.643872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=92386 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 92386 /tmp/host.sock 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 92386 ']' 01:16:36.459 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 01:16:36.459 11:13:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 01:16:36.718 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:16:36.718 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:16:36.718 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 01:16:36.718 11:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:36.718 [2024-07-22 11:13:41.714161] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:16:36.718 [2024-07-22 11:13:41.714224] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92386 ] 01:16:36.718 [2024-07-22 11:13:41.857051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:36.718 [2024-07-22 11:13:41.898774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:16:37.651 11:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:16:37.651 11:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 01:16:37.652 11:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:16:37.652 11:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 01:16:37.652 11:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:37.652 11:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:37.652 11:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:37.652 11:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 01:16:37.652 11:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:37.652 11:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:37.652 [2024-07-22 11:13:42.598600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:16:37.652 11:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:37.652 11:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 01:16:37.652 11:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:37.652 11:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:38.594 [2024-07-22 11:13:43.633188] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:16:38.594 [2024-07-22 11:13:43.633226] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:16:38.594 [2024-07-22 11:13:43.633240] 
bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:16:38.594 [2024-07-22 11:13:43.639217] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 01:16:38.594 [2024-07-22 11:13:43.696029] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:16:38.594 [2024-07-22 11:13:43.696089] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:16:38.594 [2024-07-22 11:13:43.696111] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:16:38.594 [2024-07-22 11:13:43.696128] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:16:38.594 [2024-07-22 11:13:43.696151] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:16:38.594 [2024-07-22 11:13:43.701796] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ea1f20 was disconnected and freed. delete nvme_qpair. 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:16:38.594 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 01:16:38.852 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:38.852 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:16:38.852 11:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:16:39.787 11:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:16:39.787 11:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:16:39.787 11:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:16:39.787 11:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:39.787 11:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:39.787 11:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:16:39.787 11:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:16:39.787 11:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:39.787 11:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:16:39.787 11:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:16:40.722 11:13:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:16:40.722 11:13:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:16:40.722 11:13:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:16:40.722 11:13:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:40.722 11:13:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:40.722 11:13:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:16:40.722 11:13:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:16:40.722 11:13:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:40.722 11:13:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:16:40.722 11:13:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:16:42.100 11:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:16:42.100 11:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:16:42.100 11:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:42.100 11:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:42.100 11:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:16:42.100 11:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:16:42.100 11:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:16:42.100 11:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:42.100 11:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:16:42.100 11:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:16:43.037 11:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:16:43.037 11:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:16:43.037 11:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:43.037 11:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:43.037 11:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:16:43.037 11:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:16:43.037 11:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:16:43.037 11:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:43.037 11:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:16:43.037 11:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:16:43.975 11:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:16:43.975 11:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:16:43.975 11:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:16:43.975 11:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:16:43.975 11:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:16:43.975 11:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:43.975 11:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:43.975 11:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:43.975 11:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:16:43.975 11:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:16:43.975 [2024-07-22 11:13:49.114995] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 01:16:43.975 [2024-07-22 11:13:49.115185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:16:43.975 [2024-07-22 11:13:49.115289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:16:43.975 [2024-07-22 11:13:49.115340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:16:43.975 [2024-07-22 11:13:49.115423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:16:43.975 [2024-07-22 11:13:49.115473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:16:43.975 [2024-07-22 11:13:49.115519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:16:43.975 [2024-07-22 11:13:49.115603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:16:43.975 [2024-07-22 11:13:49.115778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:16:43.975 [2024-07-22 11:13:49.115824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:16:43.975 [2024-07-22 11:13:49.115883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:16:43.976 [2024-07-22 11:13:49.115975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e668b0 is same with the state(5) to be set 01:16:43.976 [2024-07-22 11:13:49.124974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e668b0 (9): Bad file descriptor 01:16:43.976 [2024-07-22 11:13:49.134976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:16:45.020 11:13:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:16:45.020 11:13:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:16:45.020 11:13:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:16:45.020 11:13:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:16:45.020 11:13:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:45.020 11:13:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:45.021 11:13:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:16:45.021 [2024-07-22 11:13:50.189940] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 01:16:45.021 [2024-07-22 11:13:50.190459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e668b0 with addr=10.0.0.2, port=4420 01:16:45.021 [2024-07-22 11:13:50.190984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e668b0 is same with the state(5) to be set 01:16:45.021 [2024-07-22 11:13:50.191294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e668b0 (9): Bad file descriptor 01:16:45.021 [2024-07-22 11:13:50.192338] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 01:16:45.021 [2024-07-22 11:13:50.192404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:16:45.021 [2024-07-22 11:13:50.192435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:16:45.021 [2024-07-22 11:13:50.192467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:16:45.021 [2024-07-22 11:13:50.192540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
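The steps tagged @29/@33/@34 that keep repeating above are the bdev polling helpers of host/discovery_remove_ifc.sh. Roughly paraphrased below; the trace only shows the per-iteration commands, so the unbounded while loop (the real helper may cap its retries) is an assumption:

    # Condensed paraphrase of get_bdev_list / wait_for_bdev as exercised in the trace.
    get_bdev_list() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # Poll once per second until the bdev list matches the expected value.
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme0n1   # after the first discovery attach
    wait_for_bdev ''        # after 10.0.0.2 is removed from nvmf_tgt_if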
01:16:45.021 [2024-07-22 11:13:50.192573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:16:45.021 11:13:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:45.021 11:13:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:16:45.021 11:13:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:16:46.393 [2024-07-22 11:13:51.191059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:16:46.393 [2024-07-22 11:13:51.191128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:16:46.393 [2024-07-22 11:13:51.191139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:16:46.393 [2024-07-22 11:13:51.191150] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 01:16:46.393 [2024-07-22 11:13:51.191170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:16:46.393 [2024-07-22 11:13:51.191197] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 01:16:46.393 [2024-07-22 11:13:51.191255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:16:46.393 [2024-07-22 11:13:51.191268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:16:46.393 [2024-07-22 11:13:51.191281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:16:46.393 [2024-07-22 11:13:51.191291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:16:46.393 [2024-07-22 11:13:51.191301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:16:46.393 [2024-07-22 11:13:51.191310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:16:46.393 [2024-07-22 11:13:51.191320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:16:46.393 [2024-07-22 11:13:51.191328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:16:46.393 [2024-07-22 11:13:51.191338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:16:46.393 [2024-07-22 11:13:51.191347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:16:46.393 [2024-07-22 11:13:51.191356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
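The reset failures and "in failed state" messages above are the intended outcome of pulling 10.0.0.2 off nvmf_tgt_if while a discovery-attached controller is live: the session was opened earlier in the trace with a 2-second ctrlr-loss timeout and 1-second reconnect delay, so the host quickly gives up on nvme0 and deletes its bdev. For reference, that discovery command (issued through rpc_cmd in the trace; shown here via rpc.py directly) was:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach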
01:16:46.393 [2024-07-22 11:13:51.191434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65ea0 (9): Bad file descriptor 01:16:46.393 [2024-07-22 11:13:51.192407] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 01:16:46.393 [2024-07-22 11:13:51.192419] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:16:46.393 11:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:16:47.327 11:13:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:16:47.327 11:13:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:16:47.327 11:13:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:47.327 11:13:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:47.327 11:13:52 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:16:47.327 11:13:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:16:47.327 11:13:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:16:47.327 11:13:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:47.327 11:13:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:16:47.327 11:13:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:16:48.274 [2024-07-22 11:13:53.192015] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:16:48.274 [2024-07-22 11:13:53.192055] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:16:48.274 [2024-07-22 11:13:53.192070] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:16:48.274 [2024-07-22 11:13:53.198036] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 01:16:48.274 [2024-07-22 11:13:53.254486] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:16:48.274 [2024-07-22 11:13:53.254561] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:16:48.274 [2024-07-22 11:13:53.254584] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:16:48.274 [2024-07-22 11:13:53.254605] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 01:16:48.274 [2024-07-22 11:13:53.254617] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:16:48.274 [2024-07-22 11:13:53.260712] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e55d40 was disconnected and freed. delete nvme_qpair. 
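Once the two ip netns exec commands traced above restore 10.0.0.2/24 on nvmf_tgt_if and bring the link back up, the discovery poller reattaches to the subsystem and the namespace reappears as nvme1n1, which is what the bdev_nvme entries just above record. Restated compactly as the shell steps the trace shows (reusing the wait helper sketched earlier):

  # Re-add the target address inside its network namespace and bring it up.
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  # Discovery reconnects on its own; wait until the namespace bdev is back.
  wait_for_bdev nvme1n1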
01:16:48.274 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:16:48.274 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:16:48.274 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:16:48.274 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:16:48.274 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:16:48.274 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:48.274 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:48.274 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:48.275 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 01:16:48.275 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 01:16:48.275 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 92386 01:16:48.275 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 92386 ']' 01:16:48.275 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 92386 01:16:48.275 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 01:16:48.275 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:16:48.275 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92386 01:16:48.533 killing process with pid 92386 01:16:48.533 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:16:48.533 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:16:48.533 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92386' 01:16:48.533 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 92386 01:16:48.533 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 92386 01:16:48.533 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 01:16:48.533 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 01:16:48.533 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 01:16:48.533 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:16:48.533 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 01:16:48.533 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 01:16:48.533 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:16:48.533 rmmod nvme_tcp 01:16:48.533 rmmod nvme_fabrics 01:16:48.807 rmmod nvme_keyring 01:16:48.807 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:16:48.807 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 01:16:48.807 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 01:16:48.807 11:13:53 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 92354 ']' 01:16:48.807 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 92354 01:16:48.807 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 92354 ']' 01:16:48.807 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 92354 01:16:48.807 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 01:16:48.807 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:16:48.807 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92354 01:16:48.807 killing process with pid 92354 01:16:48.807 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:16:48.807 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:16:48.807 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92354' 01:16:48.807 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 92354 01:16:48.807 11:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 92354 01:16:49.064 11:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:16:49.064 11:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:16:49.064 11:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:16:49.064 11:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:16:49.064 11:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 01:16:49.064 11:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:16:49.064 11:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:16:49.064 11:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:16:49.064 11:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:16:49.064 01:16:49.064 real 0m14.216s 01:16:49.064 user 0m23.354s 01:16:49.064 sys 0m3.255s 01:16:49.064 11:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:16:49.064 11:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:16:49.064 ************************************ 01:16:49.064 END TEST nvmf_discovery_remove_ifc 01:16:49.064 ************************************ 01:16:49.064 11:13:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:16:49.064 11:13:54 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:16:49.064 11:13:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:16:49.064 11:13:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:16:49.064 11:13:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:16:49.064 ************************************ 01:16:49.064 START TEST nvmf_identify_kernel_target 01:16:49.064 ************************************ 01:16:49.064 11:13:54 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:16:49.322 * Looking for test storage... 01:16:49.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:16:49.322 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:16:49.323 Cannot find device "nvmf_tgt_br" 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:16:49.323 Cannot find device "nvmf_tgt_br2" 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:16:49.323 Cannot find device "nvmf_tgt_br" 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:16:49.323 Cannot find device "nvmf_tgt_br2" 01:16:49.323 11:13:54 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:16:49.323 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:16:49.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:16:49.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:16:49.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:16:49.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 01:16:49.581 01:16:49.581 --- 10.0.0.2 ping statistics --- 01:16:49.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:49.581 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:16:49.581 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:16:49.581 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 01:16:49.581 01:16:49.581 --- 10.0.0.3 ping statistics --- 01:16:49.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:49.581 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 01:16:49.581 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:16:49.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:16:49.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 01:16:49.582 01:16:49.582 --- 10.0.0.1 ping statistics --- 01:16:49.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:49.582 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:16:49.582 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:16:49.582 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 01:16:49.582 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:16:49.582 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:16:49.582 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:16:49.582 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:16:49.582 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:16:49.582 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:16:49.582 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 01:16:49.839 11:13:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:16:50.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:16:50.404 Waiting for block devices as requested 01:16:50.404 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:16:50.405 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:16:50.663 No valid GPT data, bailing 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:16:50.663 No valid GPT data, bailing 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 01:16:50.663 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:16:50.664 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 01:16:50.664 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 01:16:50.664 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 01:16:50.664 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:16:50.664 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:16:50.664 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 01:16:50.664 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 01:16:50.664 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:16:50.929 No valid GPT data, bailing 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:16:50.929 No valid GPT data, bailing 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
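The mkdir/echo/ln -s commands traced immediately above and below build a kernel NVMe-oF (nvmet) target over configfs for the selected block device. xtrace does not show redirection targets, so the attribute files in the sketch below are the standard nvmet configfs names and are an assumption about what nvmf/common.sh actually writes to; the values are the ones visible in the trace (and the Model Number reported by the identify output further down matches the attr_model value):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"

  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed attribute file
  echo 1            > "$subsys/attr_allow_any_host"              # assumed attribute file
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"

  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"

  # Export the subsystem on the port; the kernel target then listens on
  # 10.0.0.1:4420 and the nvme discover call traced below can see it.
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"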
01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 01:16:50.929 11:13:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:16:50.929 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -a 10.0.0.1 -t tcp -s 4420 01:16:50.929 01:16:50.929 Discovery Log Number of Records 2, Generation counter 2 01:16:50.929 =====Discovery Log Entry 0====== 01:16:50.929 trtype: tcp 01:16:50.929 adrfam: ipv4 01:16:50.929 subtype: current discovery subsystem 01:16:50.929 treq: not specified, sq flow control disable supported 01:16:50.929 portid: 1 01:16:50.929 trsvcid: 4420 01:16:50.929 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:16:50.929 traddr: 10.0.0.1 01:16:50.929 eflags: none 01:16:50.929 sectype: none 01:16:50.929 =====Discovery Log Entry 1====== 01:16:50.929 trtype: tcp 01:16:50.929 adrfam: ipv4 01:16:50.929 subtype: nvme subsystem 01:16:50.929 treq: not specified, sq flow control disable supported 01:16:50.929 portid: 1 01:16:50.929 trsvcid: 4420 01:16:50.929 subnqn: nqn.2016-06.io.spdk:testnqn 01:16:50.929 traddr: 10.0.0.1 01:16:50.929 eflags: none 01:16:50.929 sectype: none 01:16:50.929 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 01:16:50.929 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 01:16:51.187 ===================================================== 01:16:51.187 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 01:16:51.187 ===================================================== 01:16:51.187 Controller Capabilities/Features 01:16:51.187 ================================ 01:16:51.187 Vendor ID: 0000 01:16:51.187 Subsystem Vendor ID: 0000 01:16:51.187 Serial Number: 23a099699e80bc605616 01:16:51.187 Model Number: Linux 01:16:51.187 Firmware Version: 6.7.0-68 01:16:51.187 Recommended Arb Burst: 0 01:16:51.187 IEEE OUI Identifier: 00 00 00 01:16:51.187 Multi-path I/O 01:16:51.187 May have multiple subsystem ports: No 01:16:51.187 May have multiple controllers: No 01:16:51.187 Associated with SR-IOV VF: No 01:16:51.187 Max Data Transfer Size: Unlimited 01:16:51.187 Max Number of Namespaces: 0 
01:16:51.187 Max Number of I/O Queues: 1024 01:16:51.187 NVMe Specification Version (VS): 1.3 01:16:51.187 NVMe Specification Version (Identify): 1.3 01:16:51.187 Maximum Queue Entries: 1024 01:16:51.187 Contiguous Queues Required: No 01:16:51.187 Arbitration Mechanisms Supported 01:16:51.187 Weighted Round Robin: Not Supported 01:16:51.187 Vendor Specific: Not Supported 01:16:51.187 Reset Timeout: 7500 ms 01:16:51.187 Doorbell Stride: 4 bytes 01:16:51.187 NVM Subsystem Reset: Not Supported 01:16:51.187 Command Sets Supported 01:16:51.187 NVM Command Set: Supported 01:16:51.187 Boot Partition: Not Supported 01:16:51.187 Memory Page Size Minimum: 4096 bytes 01:16:51.187 Memory Page Size Maximum: 4096 bytes 01:16:51.187 Persistent Memory Region: Not Supported 01:16:51.187 Optional Asynchronous Events Supported 01:16:51.187 Namespace Attribute Notices: Not Supported 01:16:51.187 Firmware Activation Notices: Not Supported 01:16:51.187 ANA Change Notices: Not Supported 01:16:51.187 PLE Aggregate Log Change Notices: Not Supported 01:16:51.187 LBA Status Info Alert Notices: Not Supported 01:16:51.187 EGE Aggregate Log Change Notices: Not Supported 01:16:51.187 Normal NVM Subsystem Shutdown event: Not Supported 01:16:51.187 Zone Descriptor Change Notices: Not Supported 01:16:51.187 Discovery Log Change Notices: Supported 01:16:51.187 Controller Attributes 01:16:51.187 128-bit Host Identifier: Not Supported 01:16:51.187 Non-Operational Permissive Mode: Not Supported 01:16:51.187 NVM Sets: Not Supported 01:16:51.187 Read Recovery Levels: Not Supported 01:16:51.187 Endurance Groups: Not Supported 01:16:51.187 Predictable Latency Mode: Not Supported 01:16:51.187 Traffic Based Keep ALive: Not Supported 01:16:51.187 Namespace Granularity: Not Supported 01:16:51.187 SQ Associations: Not Supported 01:16:51.187 UUID List: Not Supported 01:16:51.187 Multi-Domain Subsystem: Not Supported 01:16:51.187 Fixed Capacity Management: Not Supported 01:16:51.187 Variable Capacity Management: Not Supported 01:16:51.187 Delete Endurance Group: Not Supported 01:16:51.187 Delete NVM Set: Not Supported 01:16:51.187 Extended LBA Formats Supported: Not Supported 01:16:51.187 Flexible Data Placement Supported: Not Supported 01:16:51.187 01:16:51.187 Controller Memory Buffer Support 01:16:51.187 ================================ 01:16:51.187 Supported: No 01:16:51.187 01:16:51.187 Persistent Memory Region Support 01:16:51.187 ================================ 01:16:51.187 Supported: No 01:16:51.187 01:16:51.187 Admin Command Set Attributes 01:16:51.187 ============================ 01:16:51.187 Security Send/Receive: Not Supported 01:16:51.187 Format NVM: Not Supported 01:16:51.187 Firmware Activate/Download: Not Supported 01:16:51.187 Namespace Management: Not Supported 01:16:51.187 Device Self-Test: Not Supported 01:16:51.187 Directives: Not Supported 01:16:51.187 NVMe-MI: Not Supported 01:16:51.187 Virtualization Management: Not Supported 01:16:51.188 Doorbell Buffer Config: Not Supported 01:16:51.188 Get LBA Status Capability: Not Supported 01:16:51.188 Command & Feature Lockdown Capability: Not Supported 01:16:51.188 Abort Command Limit: 1 01:16:51.188 Async Event Request Limit: 1 01:16:51.188 Number of Firmware Slots: N/A 01:16:51.188 Firmware Slot 1 Read-Only: N/A 01:16:51.188 Firmware Activation Without Reset: N/A 01:16:51.188 Multiple Update Detection Support: N/A 01:16:51.188 Firmware Update Granularity: No Information Provided 01:16:51.188 Per-Namespace SMART Log: No 01:16:51.188 Asymmetric Namespace Access Log Page: 
Not Supported 01:16:51.188 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 01:16:51.188 Command Effects Log Page: Not Supported 01:16:51.188 Get Log Page Extended Data: Supported 01:16:51.188 Telemetry Log Pages: Not Supported 01:16:51.188 Persistent Event Log Pages: Not Supported 01:16:51.188 Supported Log Pages Log Page: May Support 01:16:51.188 Commands Supported & Effects Log Page: Not Supported 01:16:51.188 Feature Identifiers & Effects Log Page:May Support 01:16:51.188 NVMe-MI Commands & Effects Log Page: May Support 01:16:51.188 Data Area 4 for Telemetry Log: Not Supported 01:16:51.188 Error Log Page Entries Supported: 1 01:16:51.188 Keep Alive: Not Supported 01:16:51.188 01:16:51.188 NVM Command Set Attributes 01:16:51.188 ========================== 01:16:51.188 Submission Queue Entry Size 01:16:51.188 Max: 1 01:16:51.188 Min: 1 01:16:51.188 Completion Queue Entry Size 01:16:51.188 Max: 1 01:16:51.188 Min: 1 01:16:51.188 Number of Namespaces: 0 01:16:51.188 Compare Command: Not Supported 01:16:51.188 Write Uncorrectable Command: Not Supported 01:16:51.188 Dataset Management Command: Not Supported 01:16:51.188 Write Zeroes Command: Not Supported 01:16:51.188 Set Features Save Field: Not Supported 01:16:51.188 Reservations: Not Supported 01:16:51.188 Timestamp: Not Supported 01:16:51.188 Copy: Not Supported 01:16:51.188 Volatile Write Cache: Not Present 01:16:51.188 Atomic Write Unit (Normal): 1 01:16:51.188 Atomic Write Unit (PFail): 1 01:16:51.188 Atomic Compare & Write Unit: 1 01:16:51.188 Fused Compare & Write: Not Supported 01:16:51.188 Scatter-Gather List 01:16:51.188 SGL Command Set: Supported 01:16:51.188 SGL Keyed: Not Supported 01:16:51.188 SGL Bit Bucket Descriptor: Not Supported 01:16:51.188 SGL Metadata Pointer: Not Supported 01:16:51.188 Oversized SGL: Not Supported 01:16:51.188 SGL Metadata Address: Not Supported 01:16:51.188 SGL Offset: Supported 01:16:51.188 Transport SGL Data Block: Not Supported 01:16:51.188 Replay Protected Memory Block: Not Supported 01:16:51.188 01:16:51.188 Firmware Slot Information 01:16:51.188 ========================= 01:16:51.188 Active slot: 0 01:16:51.188 01:16:51.188 01:16:51.188 Error Log 01:16:51.188 ========= 01:16:51.188 01:16:51.188 Active Namespaces 01:16:51.188 ================= 01:16:51.188 Discovery Log Page 01:16:51.188 ================== 01:16:51.188 Generation Counter: 2 01:16:51.188 Number of Records: 2 01:16:51.188 Record Format: 0 01:16:51.188 01:16:51.188 Discovery Log Entry 0 01:16:51.188 ---------------------- 01:16:51.188 Transport Type: 3 (TCP) 01:16:51.188 Address Family: 1 (IPv4) 01:16:51.188 Subsystem Type: 3 (Current Discovery Subsystem) 01:16:51.188 Entry Flags: 01:16:51.188 Duplicate Returned Information: 0 01:16:51.188 Explicit Persistent Connection Support for Discovery: 0 01:16:51.188 Transport Requirements: 01:16:51.188 Secure Channel: Not Specified 01:16:51.188 Port ID: 1 (0x0001) 01:16:51.188 Controller ID: 65535 (0xffff) 01:16:51.188 Admin Max SQ Size: 32 01:16:51.188 Transport Service Identifier: 4420 01:16:51.188 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 01:16:51.188 Transport Address: 10.0.0.1 01:16:51.188 Discovery Log Entry 1 01:16:51.188 ---------------------- 01:16:51.188 Transport Type: 3 (TCP) 01:16:51.188 Address Family: 1 (IPv4) 01:16:51.188 Subsystem Type: 2 (NVM Subsystem) 01:16:51.188 Entry Flags: 01:16:51.188 Duplicate Returned Information: 0 01:16:51.188 Explicit Persistent Connection Support for Discovery: 0 01:16:51.188 Transport Requirements: 01:16:51.188 
Secure Channel: Not Specified 01:16:51.188 Port ID: 1 (0x0001) 01:16:51.188 Controller ID: 65535 (0xffff) 01:16:51.188 Admin Max SQ Size: 32 01:16:51.188 Transport Service Identifier: 4420 01:16:51.188 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 01:16:51.188 Transport Address: 10.0.0.1 01:16:51.188 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:16:51.446 get_feature(0x01) failed 01:16:51.446 get_feature(0x02) failed 01:16:51.446 get_feature(0x04) failed 01:16:51.446 ===================================================== 01:16:51.446 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:16:51.446 ===================================================== 01:16:51.446 Controller Capabilities/Features 01:16:51.446 ================================ 01:16:51.446 Vendor ID: 0000 01:16:51.446 Subsystem Vendor ID: 0000 01:16:51.446 Serial Number: 0d6496bccaae2b13d596 01:16:51.446 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 01:16:51.446 Firmware Version: 6.7.0-68 01:16:51.446 Recommended Arb Burst: 6 01:16:51.446 IEEE OUI Identifier: 00 00 00 01:16:51.446 Multi-path I/O 01:16:51.446 May have multiple subsystem ports: Yes 01:16:51.446 May have multiple controllers: Yes 01:16:51.446 Associated with SR-IOV VF: No 01:16:51.446 Max Data Transfer Size: Unlimited 01:16:51.446 Max Number of Namespaces: 1024 01:16:51.446 Max Number of I/O Queues: 128 01:16:51.446 NVMe Specification Version (VS): 1.3 01:16:51.446 NVMe Specification Version (Identify): 1.3 01:16:51.446 Maximum Queue Entries: 1024 01:16:51.446 Contiguous Queues Required: No 01:16:51.446 Arbitration Mechanisms Supported 01:16:51.446 Weighted Round Robin: Not Supported 01:16:51.446 Vendor Specific: Not Supported 01:16:51.446 Reset Timeout: 7500 ms 01:16:51.446 Doorbell Stride: 4 bytes 01:16:51.446 NVM Subsystem Reset: Not Supported 01:16:51.446 Command Sets Supported 01:16:51.446 NVM Command Set: Supported 01:16:51.446 Boot Partition: Not Supported 01:16:51.446 Memory Page Size Minimum: 4096 bytes 01:16:51.446 Memory Page Size Maximum: 4096 bytes 01:16:51.446 Persistent Memory Region: Not Supported 01:16:51.446 Optional Asynchronous Events Supported 01:16:51.446 Namespace Attribute Notices: Supported 01:16:51.446 Firmware Activation Notices: Not Supported 01:16:51.446 ANA Change Notices: Supported 01:16:51.446 PLE Aggregate Log Change Notices: Not Supported 01:16:51.446 LBA Status Info Alert Notices: Not Supported 01:16:51.446 EGE Aggregate Log Change Notices: Not Supported 01:16:51.446 Normal NVM Subsystem Shutdown event: Not Supported 01:16:51.446 Zone Descriptor Change Notices: Not Supported 01:16:51.446 Discovery Log Change Notices: Not Supported 01:16:51.446 Controller Attributes 01:16:51.446 128-bit Host Identifier: Supported 01:16:51.446 Non-Operational Permissive Mode: Not Supported 01:16:51.446 NVM Sets: Not Supported 01:16:51.446 Read Recovery Levels: Not Supported 01:16:51.446 Endurance Groups: Not Supported 01:16:51.446 Predictable Latency Mode: Not Supported 01:16:51.446 Traffic Based Keep ALive: Supported 01:16:51.446 Namespace Granularity: Not Supported 01:16:51.446 SQ Associations: Not Supported 01:16:51.446 UUID List: Not Supported 01:16:51.446 Multi-Domain Subsystem: Not Supported 01:16:51.446 Fixed Capacity Management: Not Supported 01:16:51.446 Variable Capacity Management: Not Supported 01:16:51.446 
Delete Endurance Group: Not Supported 01:16:51.446 Delete NVM Set: Not Supported 01:16:51.446 Extended LBA Formats Supported: Not Supported 01:16:51.446 Flexible Data Placement Supported: Not Supported 01:16:51.446 01:16:51.446 Controller Memory Buffer Support 01:16:51.446 ================================ 01:16:51.446 Supported: No 01:16:51.446 01:16:51.446 Persistent Memory Region Support 01:16:51.446 ================================ 01:16:51.446 Supported: No 01:16:51.446 01:16:51.446 Admin Command Set Attributes 01:16:51.446 ============================ 01:16:51.446 Security Send/Receive: Not Supported 01:16:51.446 Format NVM: Not Supported 01:16:51.446 Firmware Activate/Download: Not Supported 01:16:51.446 Namespace Management: Not Supported 01:16:51.446 Device Self-Test: Not Supported 01:16:51.446 Directives: Not Supported 01:16:51.446 NVMe-MI: Not Supported 01:16:51.446 Virtualization Management: Not Supported 01:16:51.446 Doorbell Buffer Config: Not Supported 01:16:51.446 Get LBA Status Capability: Not Supported 01:16:51.446 Command & Feature Lockdown Capability: Not Supported 01:16:51.446 Abort Command Limit: 4 01:16:51.446 Async Event Request Limit: 4 01:16:51.446 Number of Firmware Slots: N/A 01:16:51.446 Firmware Slot 1 Read-Only: N/A 01:16:51.446 Firmware Activation Without Reset: N/A 01:16:51.446 Multiple Update Detection Support: N/A 01:16:51.446 Firmware Update Granularity: No Information Provided 01:16:51.446 Per-Namespace SMART Log: Yes 01:16:51.446 Asymmetric Namespace Access Log Page: Supported 01:16:51.446 ANA Transition Time : 10 sec 01:16:51.446 01:16:51.446 Asymmetric Namespace Access Capabilities 01:16:51.446 ANA Optimized State : Supported 01:16:51.446 ANA Non-Optimized State : Supported 01:16:51.446 ANA Inaccessible State : Supported 01:16:51.446 ANA Persistent Loss State : Supported 01:16:51.446 ANA Change State : Supported 01:16:51.446 ANAGRPID is not changed : No 01:16:51.446 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 01:16:51.446 01:16:51.446 ANA Group Identifier Maximum : 128 01:16:51.446 Number of ANA Group Identifiers : 128 01:16:51.446 Max Number of Allowed Namespaces : 1024 01:16:51.446 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 01:16:51.446 Command Effects Log Page: Supported 01:16:51.446 Get Log Page Extended Data: Supported 01:16:51.446 Telemetry Log Pages: Not Supported 01:16:51.446 Persistent Event Log Pages: Not Supported 01:16:51.446 Supported Log Pages Log Page: May Support 01:16:51.446 Commands Supported & Effects Log Page: Not Supported 01:16:51.446 Feature Identifiers & Effects Log Page:May Support 01:16:51.446 NVMe-MI Commands & Effects Log Page: May Support 01:16:51.446 Data Area 4 for Telemetry Log: Not Supported 01:16:51.446 Error Log Page Entries Supported: 128 01:16:51.446 Keep Alive: Supported 01:16:51.446 Keep Alive Granularity: 1000 ms 01:16:51.446 01:16:51.446 NVM Command Set Attributes 01:16:51.446 ========================== 01:16:51.446 Submission Queue Entry Size 01:16:51.446 Max: 64 01:16:51.446 Min: 64 01:16:51.446 Completion Queue Entry Size 01:16:51.446 Max: 16 01:16:51.446 Min: 16 01:16:51.446 Number of Namespaces: 1024 01:16:51.446 Compare Command: Not Supported 01:16:51.446 Write Uncorrectable Command: Not Supported 01:16:51.446 Dataset Management Command: Supported 01:16:51.446 Write Zeroes Command: Supported 01:16:51.446 Set Features Save Field: Not Supported 01:16:51.446 Reservations: Not Supported 01:16:51.447 Timestamp: Not Supported 01:16:51.447 Copy: Not Supported 01:16:51.447 Volatile Write Cache: Present 
01:16:51.447 Atomic Write Unit (Normal): 1 01:16:51.447 Atomic Write Unit (PFail): 1 01:16:51.447 Atomic Compare & Write Unit: 1 01:16:51.447 Fused Compare & Write: Not Supported 01:16:51.447 Scatter-Gather List 01:16:51.447 SGL Command Set: Supported 01:16:51.447 SGL Keyed: Not Supported 01:16:51.447 SGL Bit Bucket Descriptor: Not Supported 01:16:51.447 SGL Metadata Pointer: Not Supported 01:16:51.447 Oversized SGL: Not Supported 01:16:51.447 SGL Metadata Address: Not Supported 01:16:51.447 SGL Offset: Supported 01:16:51.447 Transport SGL Data Block: Not Supported 01:16:51.447 Replay Protected Memory Block: Not Supported 01:16:51.447 01:16:51.447 Firmware Slot Information 01:16:51.447 ========================= 01:16:51.447 Active slot: 0 01:16:51.447 01:16:51.447 Asymmetric Namespace Access 01:16:51.447 =========================== 01:16:51.447 Change Count : 0 01:16:51.447 Number of ANA Group Descriptors : 1 01:16:51.447 ANA Group Descriptor : 0 01:16:51.447 ANA Group ID : 1 01:16:51.447 Number of NSID Values : 1 01:16:51.447 Change Count : 0 01:16:51.447 ANA State : 1 01:16:51.447 Namespace Identifier : 1 01:16:51.447 01:16:51.447 Commands Supported and Effects 01:16:51.447 ============================== 01:16:51.447 Admin Commands 01:16:51.447 -------------- 01:16:51.447 Get Log Page (02h): Supported 01:16:51.447 Identify (06h): Supported 01:16:51.447 Abort (08h): Supported 01:16:51.447 Set Features (09h): Supported 01:16:51.447 Get Features (0Ah): Supported 01:16:51.447 Asynchronous Event Request (0Ch): Supported 01:16:51.447 Keep Alive (18h): Supported 01:16:51.447 I/O Commands 01:16:51.447 ------------ 01:16:51.447 Flush (00h): Supported 01:16:51.447 Write (01h): Supported LBA-Change 01:16:51.447 Read (02h): Supported 01:16:51.447 Write Zeroes (08h): Supported LBA-Change 01:16:51.447 Dataset Management (09h): Supported 01:16:51.447 01:16:51.447 Error Log 01:16:51.447 ========= 01:16:51.447 Entry: 0 01:16:51.447 Error Count: 0x3 01:16:51.447 Submission Queue Id: 0x0 01:16:51.447 Command Id: 0x5 01:16:51.447 Phase Bit: 0 01:16:51.447 Status Code: 0x2 01:16:51.447 Status Code Type: 0x0 01:16:51.447 Do Not Retry: 1 01:16:51.447 Error Location: 0x28 01:16:51.447 LBA: 0x0 01:16:51.447 Namespace: 0x0 01:16:51.447 Vendor Log Page: 0x0 01:16:51.447 ----------- 01:16:51.447 Entry: 1 01:16:51.447 Error Count: 0x2 01:16:51.447 Submission Queue Id: 0x0 01:16:51.447 Command Id: 0x5 01:16:51.447 Phase Bit: 0 01:16:51.447 Status Code: 0x2 01:16:51.447 Status Code Type: 0x0 01:16:51.447 Do Not Retry: 1 01:16:51.447 Error Location: 0x28 01:16:51.447 LBA: 0x0 01:16:51.447 Namespace: 0x0 01:16:51.447 Vendor Log Page: 0x0 01:16:51.447 ----------- 01:16:51.447 Entry: 2 01:16:51.447 Error Count: 0x1 01:16:51.447 Submission Queue Id: 0x0 01:16:51.447 Command Id: 0x4 01:16:51.447 Phase Bit: 0 01:16:51.447 Status Code: 0x2 01:16:51.447 Status Code Type: 0x0 01:16:51.447 Do Not Retry: 1 01:16:51.447 Error Location: 0x28 01:16:51.447 LBA: 0x0 01:16:51.447 Namespace: 0x0 01:16:51.447 Vendor Log Page: 0x0 01:16:51.447 01:16:51.447 Number of Queues 01:16:51.447 ================ 01:16:51.447 Number of I/O Submission Queues: 128 01:16:51.447 Number of I/O Completion Queues: 128 01:16:51.447 01:16:51.447 ZNS Specific Controller Data 01:16:51.447 ============================ 01:16:51.447 Zone Append Size Limit: 0 01:16:51.447 01:16:51.447 01:16:51.447 Active Namespaces 01:16:51.447 ================= 01:16:51.447 get_feature(0x05) failed 01:16:51.447 Namespace ID:1 01:16:51.447 Command Set Identifier: NVM (00h) 
01:16:51.447 Deallocate: Supported 01:16:51.447 Deallocated/Unwritten Error: Not Supported 01:16:51.447 Deallocated Read Value: Unknown 01:16:51.447 Deallocate in Write Zeroes: Not Supported 01:16:51.447 Deallocated Guard Field: 0xFFFF 01:16:51.447 Flush: Supported 01:16:51.447 Reservation: Not Supported 01:16:51.447 Namespace Sharing Capabilities: Multiple Controllers 01:16:51.447 Size (in LBAs): 1310720 (5GiB) 01:16:51.447 Capacity (in LBAs): 1310720 (5GiB) 01:16:51.447 Utilization (in LBAs): 1310720 (5GiB) 01:16:51.447 UUID: bd0b4eae-0925-4b7a-ba7a-49067a753b29 01:16:51.447 Thin Provisioning: Not Supported 01:16:51.447 Per-NS Atomic Units: Yes 01:16:51.447 Atomic Boundary Size (Normal): 0 01:16:51.447 Atomic Boundary Size (PFail): 0 01:16:51.447 Atomic Boundary Offset: 0 01:16:51.447 NGUID/EUI64 Never Reused: No 01:16:51.447 ANA group ID: 1 01:16:51.447 Namespace Write Protected: No 01:16:51.447 Number of LBA Formats: 1 01:16:51.447 Current LBA Format: LBA Format #00 01:16:51.447 LBA Format #00: Data Size: 4096 Metadata Size: 0 01:16:51.447 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:16:51.447 rmmod nvme_tcp 01:16:51.447 rmmod nvme_fabrics 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:16:51.447 
11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 01:16:51.447 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 01:16:51.705 11:13:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:16:52.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:16:52.526 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:16:52.526 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:16:52.784 ************************************ 01:16:52.784 END TEST nvmf_identify_kernel_target 01:16:52.784 ************************************ 01:16:52.784 01:16:52.784 real 0m3.530s 01:16:52.784 user 0m1.165s 01:16:52.784 sys 0m1.854s 01:16:52.784 11:13:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 01:16:52.784 11:13:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 01:16:52.784 11:13:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:16:52.784 11:13:57 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 01:16:52.784 11:13:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:16:52.784 11:13:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:16:52.784 11:13:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:16:52.784 ************************************ 01:16:52.784 START TEST nvmf_auth_host 01:16:52.784 ************************************ 01:16:52.784 11:13:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 01:16:52.784 * Looking for test storage... 
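Before nvmf_auth_host gets going, the clean_kernel_target sequence traced just above reduces to a short configfs teardown. A minimal sketch with the same NQN is shown below; xtrace hides redirection targets, so the destination of the bare "echo 0" is assumed to be the namespace's enable attribute.

nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet

if [[ -e $nvmet/subsystems/$nqn ]]; then
    echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # assumed redirect target (not shown by xtrace)
    rm -f "$nvmet/ports/1/subsystems/$nqn"                  # detach the subsystem from port 1
    rmdir "$nvmet/subsystems/$nqn/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$nvmet/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                             # unload the kernel target modules
fi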
01:16:52.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:16:52.784 11:13:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:16:52.784 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 01:16:52.784 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:16:52.784 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:16:52.784 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:16:52.784 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:16:52.784 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:16:52.784 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:16:52.784 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:16:52.785 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:16:53.050 11:13:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:16:53.050 Cannot find device "nvmf_tgt_br" 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:16:53.050 Cannot find device "nvmf_tgt_br2" 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:16:53.050 Cannot find device "nvmf_tgt_br" 
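The "Cannot find device" and "Cannot open network namespace" messages here are only the cleanup pass failing harmlessly on interfaces that do not exist yet; nvmf_veth_init then rebuilds the test network from the variables defined above. Condensed, the setup that follows in the trace amounts to roughly this sketch (same interface names and addresses as in the trace):

ns=nvmf_tgt_ns_spdk
ip netns add "$ns"
# Three veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to a bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Target-side interfaces move into the namespace the nvmf app will later run in.
ip link set nvmf_tgt_if  netns "$ns"
ip link set nvmf_tgt_if2 netns "$ns"
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$ns" ip link set nvmf_tgt_if up
ip netns exec "$ns" ip link set nvmf_tgt_if2 up
ip netns exec "$ns" ip link set lo up
# A bridge in the root namespace ties the initiator and target ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 further down confirm this wiring before any NVMe traffic is attempted.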
01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:16:53.050 Cannot find device "nvmf_tgt_br2" 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:16:53.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:16:53.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:16:53.050 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:16:53.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:16:53.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 01:16:53.323 01:16:53.323 --- 10.0.0.2 ping statistics --- 01:16:53.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:53.323 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:16:53.323 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:16:53.323 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 01:16:53.323 01:16:53.323 --- 10.0.0.3 ping statistics --- 01:16:53.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:53.323 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:16:53.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:16:53.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 01:16:53.323 01:16:53.323 --- 10.0.0.1 ping statistics --- 01:16:53.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:53.323 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=93282 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 93282 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 93282 ']' 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 01:16:53.323 11:13:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 01:16:53.323 11:13:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0d9a36d1208c46b43b6fede017e2b26f 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.TOU 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0d9a36d1208c46b43b6fede017e2b26f 0 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0d9a36d1208c46b43b6fede017e2b26f 0 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0d9a36d1208c46b43b6fede017e2b26f 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.TOU 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.TOU 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.TOU 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1d3c14e248b0ba44da8d1e4f16ed5d4f30f3b18ebff1673638f91836f5bafd76 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.hxB 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1d3c14e248b0ba44da8d1e4f16ed5d4f30f3b18ebff1673638f91836f5bafd76 3 01:16:54.258 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1d3c14e248b0ba44da8d1e4f16ed5d4f30f3b18ebff1673638f91836f5bafd76 3 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1d3c14e248b0ba44da8d1e4f16ed5d4f30f3b18ebff1673638f91836f5bafd76 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.hxB 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.hxB 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.hxB 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a9f84bc0eac75a3e4921e526d0fca73b201572a403da9fe2 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dRa 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a9f84bc0eac75a3e4921e526d0fca73b201572a403da9fe2 0 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a9f84bc0eac75a3e4921e526d0fca73b201572a403da9fe2 0 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a9f84bc0eac75a3e4921e526d0fca73b201572a403da9fe2 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dRa 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dRa 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.dRa 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fa79d5b99beb3332c870a76cd56cd53b8f2220e9de034f89 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.hwp 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fa79d5b99beb3332c870a76cd56cd53b8f2220e9de034f89 2 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fa79d5b99beb3332c870a76cd56cd53b8f2220e9de034f89 2 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fa79d5b99beb3332c870a76cd56cd53b8f2220e9de034f89 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.hwp 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.hwp 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.hwp 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b41c5c51fbb2adf780a800a074150c1c 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.TeF 01:16:54.517 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b41c5c51fbb2adf780a800a074150c1c 
1 01:16:54.518 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b41c5c51fbb2adf780a800a074150c1c 1 01:16:54.518 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:16:54.518 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:16:54.518 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b41c5c51fbb2adf780a800a074150c1c 01:16:54.518 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 01:16:54.518 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:16:54.518 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.TeF 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.TeF 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.TeF 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=070891e83ff19897aaf05b7dbe272b32 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VlD 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 070891e83ff19897aaf05b7dbe272b32 1 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 070891e83ff19897aaf05b7dbe272b32 1 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=070891e83ff19897aaf05b7dbe272b32 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VlD 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VlD 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.VlD 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 01:16:54.777 11:13:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9392100c20e2af628079ca2b48240278f474588c0b217921 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.QvK 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9392100c20e2af628079ca2b48240278f474588c0b217921 2 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9392100c20e2af628079ca2b48240278f474588c0b217921 2 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9392100c20e2af628079ca2b48240278f474588c0b217921 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.QvK 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.QvK 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.QvK 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb0d36c66d9df2f0749ac8bedd68e55a 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OEM 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eb0d36c66d9df2f0749ac8bedd68e55a 0 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb0d36c66d9df2f0749ac8bedd68e55a 0 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb0d36c66d9df2f0749ac8bedd68e55a 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OEM 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OEM 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.OEM 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8f3b3fbd0e76e7028f0f6516c96c3738b5c94ce0734a8ec11f3325ec8d9d1071 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.PT0 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8f3b3fbd0e76e7028f0f6516c96c3738b5c94ce0734a8ec11f3325ec8d9d1071 3 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8f3b3fbd0e76e7028f0f6516c96c3738b5c94ce0734a8ec11f3325ec8d9d1071 3 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8f3b3fbd0e76e7028f0f6516c96c3738b5c94ce0734a8ec11f3325ec8d9d1071 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 01:16:54.777 11:13:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.PT0 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.PT0 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.PT0 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 93282 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 93282 ']' 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 01:16:55.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
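Before the RPC registrations that follow, note the pattern each gen_dhchap_key call above used: pull N/2 random bytes as a hex string, wrap it in the DHHC-1 secret representation, and stash it in a mode-0600 temp file. A rough sketch for the "null 48" case is below; the body of the python one-liner is not visible in the xtrace, so the encoding shown (base64 of the hex string plus a little-endian CRC-32) is an approximation chosen to reproduce the DHHC-1:00:...: strings that appear later in this log.

key=$(xxd -p -c0 -l 24 /dev/urandom)        # 48 hex characters of secret
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1], int(sys.argv[2])   # digest id: 0=null, 1=sha256, 2=sha384, 3=sha512
blob = key.encode() + zlib.crc32(key.encode()).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(blob).decode()}:")
PY
chmod 0600 "$file"
echo "$file"

The same helper is reused with digest ids 1 through 3 for the sha256/sha384/sha512 keys generated above.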
01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.TOU 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.hxB ]] 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hxB 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.dRa 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:55.036 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.hwp ]] 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hwp 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.TeF 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.VlD ]] 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VlD 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
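rpc_cmd here is, roughly, SPDK's scripts/rpc.py pointed at the app's /var/tmp/spdk.sock socket, so each registration above corresponds to a plain keyring_file_add_key RPC; the remaining keys (key3, ckey3, key4) are registered the same way in the trace that follows. Approximately:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.TOU
$rpc -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hxB
$rpc -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.dRa
$rpc -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hwp
$rpc -s /var/tmp/spdk.sock keyring_file_add_key key2  /tmp/spdk.key-sha256.TeF
$rpc -s /var/tmp/spdk.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VlD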
01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.QvK 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:55.295 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.OEM ]] 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.OEM 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.PT0 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
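With the keys registered on the SPDK side, configure_kernel_target builds the kernel nvmet subsystem this test will authenticate against, using the configfs paths defined above and whichever idle block device the probing below settles on (/dev/nvme1n1 in this run). The attribute file names are hidden by xtrace (redirections are not printed), so the sketch below assumes the standard nvmet configfs attributes:

nqn=nqn.2024-02.io.spdk:cnode0
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
mkdir "$sub"
mkdir "$sub/namespaces/1"
mkdir "$port"
echo "SPDK-$nqn" > "$sub/attr_model"               # assumed target of the 'echo SPDK-nqn...' trace
echo 1 > "$sub/attr_allow_any_host"                # assumed target of the first 'echo 1'
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                   # expose the subsystem on port 1

The nvme discover output further down, with one discovery log entry for the discovery subsystem and one for nqn.2024-02.io.spdk:cnode0 at 10.0.0.1:4420, is the first confirmation that this kernel target is reachable.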
01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 01:16:55.296 11:14:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:16:55.878 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:16:55.878 Waiting for block devices as requested 01:16:55.878 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:16:56.137 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:16:56.705 11:14:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:16:56.705 11:14:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 01:16:56.705 11:14:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 01:16:56.705 11:14:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 01:16:56.705 11:14:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:16:56.705 11:14:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:16:56.705 11:14:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 01:16:56.705 11:14:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 01:16:56.705 11:14:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:16:56.964 No valid GPT data, bailing 01:16:56.964 11:14:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:16:56.964 11:14:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 01:16:56.964 11:14:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 01:16:56.964 11:14:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 01:16:56.964 11:14:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:16:56.964 11:14:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 01:16:56.964 11:14:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 01:16:56.964 11:14:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 01:16:56.964 11:14:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:16:56.964 11:14:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:16:56.964 11:14:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 01:16:56.964 11:14:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 01:16:56.964 11:14:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:16:56.964 No valid GPT data, bailing 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:16:56.964 No valid GPT data, bailing 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 01:16:56.964 11:14:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:16:57.223 No valid GPT data, bailing 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 01:16:57.223 11:14:02 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 01:16:57.223 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -a 10.0.0.1 -t tcp -s 4420 01:16:57.224 01:16:57.224 Discovery Log Number of Records 2, Generation counter 2 01:16:57.224 =====Discovery Log Entry 0====== 01:16:57.224 trtype: tcp 01:16:57.224 adrfam: ipv4 01:16:57.224 subtype: current discovery subsystem 01:16:57.224 treq: not specified, sq flow control disable supported 01:16:57.224 portid: 1 01:16:57.224 trsvcid: 4420 01:16:57.224 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:16:57.224 traddr: 10.0.0.1 01:16:57.224 eflags: none 01:16:57.224 sectype: none 01:16:57.224 =====Discovery Log Entry 1====== 01:16:57.224 trtype: tcp 01:16:57.224 adrfam: ipv4 01:16:57.224 subtype: nvme subsystem 01:16:57.224 treq: not specified, sq flow control disable supported 01:16:57.224 portid: 1 01:16:57.224 trsvcid: 4420 01:16:57.224 subnqn: nqn.2024-02.io.spdk:cnode0 01:16:57.224 traddr: 10.0.0.1 01:16:57.224 eflags: none 01:16:57.224 sectype: none 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:57.224 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:57.484 nvme0n1 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:57.484 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:57.744 nvme0n1 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:57.744 nvme0n1 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:57.744 11:14:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:57.744 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.003 11:14:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.003 nvme0n1 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 01:16:58.003 11:14:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:16:58.003 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:16:58.004 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:16:58.004 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:16:58.004 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:16:58.004 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:16:58.004 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:16:58.004 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:16:58.004 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:16:58.004 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:16:58.004 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.004 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.262 nvme0n1 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.262 nvme0n1 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.262 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:16:58.519 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.520 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.778 nvme0n1 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:16:58.778 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:58.779 11:14:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.037 nvme0n1 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.037 11:14:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:16:59.037 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.038 nvme0n1 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:16:59.038 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.297 nvme0n1 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
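Each connect_authenticate iteration traced here follows the same host-side pattern: narrow the SPDK initiator to the digest/DH-group pair under test, attach to the kernel nvmet subsystem with the DH-HMAC-CHAP key for the current slot, confirm the controller shows up, then detach before the next combination. A minimal sketch of one iteration, assuming the stock scripts/rpc.py client and the key names (key0..key4, ckey0..ckey3) that were registered with the bdev_nvme application earlier in the test, a step not shown in this excerpt:

  # One sha256/ffdhe3072 pass with key slot 4, condensed from the surrounding trace.
  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key4          # slots 0-3 also pass --dhchap-ctrlr-key ckeyN

  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

A failed handshake would leave bdev_nvme_attach_controller reporting an error and no nvme0 entry, which appears to be why the loop checks the controller list before detaching.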
01:16:59.297 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:16:59.298 11:14:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:16:59.298 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:16:59.298 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:59.298 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.555 nvme0n1 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:16:59.555 11:14:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
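On the target side, each nvmet_auth_set_key call pairs the same digest and DH group with the DHHC-1 secrets for the current key slot; xtrace only records the echo halves of those statements, so their destinations are not visible in this log. Under the usual kernel nvmet configfs layout the values would be written to the host entry's dhchap_* attributes, roughly as sketched below; the attribute names are assumed rather than confirmed by this trace, and $key/$ckey stand in for the DHHC-1 strings shown above:

  # Provision DH-HMAC-CHAP material for the host entry created earlier in the run.
  # dhchap_* attribute names follow the common nvmet configfs layout (assumption);
  # $key / $ckey are placeholders for the DHHC-1:xx:... secrets from the trace.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  echo 'hmac(sha256)' > "$host/dhchap_hash"     # digest under test
  echo ffdhe4096      > "$host/dhchap_dhgroup"  # DH group under test
  echo "$key"         > "$host/dhchap_key"      # host secret for this key slot
  [[ -n "$ckey" ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # only for bidirectional auth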
01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.121 nvme0n1 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.121 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:00.379 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.380 nvme0n1 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.380 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:00.637 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:00.637 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:00.637 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.637 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.637 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:00.637 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:00.637 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 01:17:00.637 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:00.637 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:00.637 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.638 nvme0n1 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.638 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.897 11:14:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.897 nvme0n1 01:17:00.897 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:00.897 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 01:17:00.897 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:00.897 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:00.897 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:00.897 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:01.156 11:14:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:01.156 nvme0n1 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:01.156 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:01.416 11:14:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:02.811 11:14:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:03.069 nvme0n1 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:03.069 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:03.363 nvme0n1 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:03.363 
11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:03.363 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:03.937 nvme0n1 01:17:03.937 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:03.937 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:03.937 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:03.937 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:03.937 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:03.937 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:03.937 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:03.937 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:03.937 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:03.937 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:03.937 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:03.937 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:03.937 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:03.938 11:14:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:04.197 nvme0n1 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:04.197 11:14:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:04.197 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:04.455 nvme0n1 01:17:04.455 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:04.455 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:04.455 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:04.455 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:04.455 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:04.455 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:04.714 11:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:05.282 nvme0n1 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:05.282 11:14:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:05.282 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:05.850 nvme0n1 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:05.850 11:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:06.417 nvme0n1 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:06.417 
11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
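[editor's note] The xtrace output in this part of the log is host/auth.sh sweeping every combination of DH-HMAC-CHAP digest, DH group, and key index against the same target. A minimal sketch of the driving loop, reconstructed from the traced host/auth.sh@100-104 lines; the exact array contents live in auth.sh and are only partly visible in this excerpt:

    # Sketch of the sweep (reconstruction, not the script verbatim)
    digests=(sha256 sha384)                                # digests seen in this run
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)     # DH groups seen in this run
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do                 # keys[] holds the DHHC-1 secrets, indexes 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the key on the kernel nvmet target
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach on the SPDK host side
            done
        done
    done
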
01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:06.417 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:06.983 nvme0n1 01:17:06.983 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:06.983 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:06.983 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:06.983 11:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:06.983 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:06.983 11:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:06.983 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:06.983 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:06.984 
11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:06.984 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:07.549 nvme0n1 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:17:07.549 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:07.550 nvme0n1 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:07.550 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
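[editor's note] Each connect_authenticate call expands to the same host-side RPC sequence traced above and below. A condensed sketch of a single pass, using the script's own rpc_cmd and get_main_ns_ip helpers and assuming the keyN/ckeyN key names were registered earlier in the test (not shown in this excerpt):

    digest=sha384 dhgroup=ffdhe2048 keyid=1
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # controller authenticated and attached
    rpc_cmd bdev_nvme_detach_controller nvme0                               # clean up before the next combination
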
01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:07.809 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:07.810 nvme0n1 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:07.810 11:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.069 nvme0n1 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.069 nvme0n1 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.069 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.327 nvme0n1 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
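[editor's note] The optional controller key in the attach calls comes from the traced host/auth.sh@58 line, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}). For key index 4 the ckey entry is empty (the [[ -z '' ]] branch seen above), so the --dhchap-ctrlr-key flag is simply dropped and authentication is unidirectional. A small standalone illustration of that :+ expansion; the array contents here are made up for the example:

    ckeys=( [1]="ckey-material" [4]="" )
    keyid=4
    args=( ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} )
    echo "${#args[@]}"    # prints 0: empty ckey, so no controller-key flags are added
    keyid=1
    args=( ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} )
    echo "${args[@]}"     # prints: --dhchap-ctrlr-key ckey1
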
01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:08.327 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:08.328 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:17:08.328 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.328 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.585 nvme0n1 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
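[editor's note] The nvmf/common.sh@741-755 lines repeated before every attach are the get_main_ns_ip helper choosing the address passed to -a: it maps the transport to an environment-variable name (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and prints that variable's value, 10.0.0.1 in this run. A rough reconstruction from the xtrace output; the real helper, and the TEST_TRANSPORT variable name, are assumptions and may differ in detail:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1            # trace shows: [[ -z tcp ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z $ip ]] && return 1                        # trace shows: [[ -z NVMF_INITIATOR_IP ]]
        [[ -z ${!ip} ]] && return 1                     # trace shows: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                   # -> 10.0.0.1
    }
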
01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.585 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.843 nvme0n1 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.843 11:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.843 nvme0n1 01:17:08.843 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:08.843 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:08.843 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:08.843 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:08.843 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:08.843 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:17:09.101 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.102 nvme0n1 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.102 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.360 nvme0n1 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.360 11:14:14 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.360 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.619 nvme0n1 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:09.619 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:09.620 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:17:09.620 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.620 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.877 nvme0n1 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.877 11:14:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:09.877 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:09.878 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:17:09.878 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:09.878 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:17:09.878 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.878 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:09.878 11:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:09.878 11:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:09.878 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:09.878 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:09.878 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:09.878 11:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:09.878 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:09.878 11:14:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:09.878 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:09.878 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:09.878 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:09.878 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:09.878 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:17:09.878 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:09.878 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.136 nvme0n1 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:10.136 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 01:17:10.137 11:14:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.137 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.395 nvme0n1 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:17:10.395 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.654 nvme0n1 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:10.654 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:10.655 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:10.655 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:10.655 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:10.655 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:10.655 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:10.655 11:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:10.655 11:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:17:10.655 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.655 11:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.913 nvme0n1 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:10.913 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:10.914 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:17:10.914 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:10.914 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:11.478 nvme0n1 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:11.478 11:14:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:11.478 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:11.761 nvme0n1 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:17:11.761 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:11.762 11:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:12.034 nvme0n1 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
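(Annotation for readers following the trace: each nvmet_auth_set_key <digest> <dhgroup> <keyid> call above amounts to the three echo's visible in the xtrace, pushing the digest, DH group and DHHC-1 secret into the kernel soft-target's per-host configfs attributes; the controller key is skipped whenever ckey is empty, as it is for keyid 4 here. The redirect targets are not shown in this excerpt, so the attribute paths below are an assumption based on the usual Linux nvmet layout — an illustrative sketch, not the harness's exact helper.)

# Sketch of the target-side provisioning behind nvmet_auth_set_key.
# NOTE: the configfs paths are assumed (standard Linux nvmet layout); the
# echoed values are the ones from the ffdhe6144/keyid=4 pass traced above.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # DH-HMAC-CHAP digest
echo 'ffdhe6144'    > "$host_dir/dhchap_dhgroup"   # DH group for this pass
echo 'DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=:' \
     > "$host_dir/dhchap_key"                      # host secret for keyid 4
# keyid 4 has no controller key (the [[ -z '' ]] branch above), so
# dhchap_ctrl_key is left untouched and authentication stays unidirectional.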
01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:12.034 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:12.602 nvme0n1 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
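(Annotation: the connect_authenticate sha384 ffdhe8192 0 call just entered drives the host side purely through SPDK RPCs, which is what the next run of entries traces. A minimal standalone sketch of that cycle follows, assuming SPDK's scripts/rpc.py on the default RPC socket and that the keyring entries key0/ckey0 were registered earlier in the test, outside this excerpt.)

# Sketch of the per-keyid host-side cycle (not the harness's rpc_cmd wrapper).
rpc=./scripts/rpc.py   # assumption: SPDK RPC client, default /var/tmp/spdk.sock

# 1. Limit the initiator to the digest/DH group under test.
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# 2. Attach over TCP, authenticating with key0 (and ckey0 for bidirectional auth).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
     -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
     --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. A successful DH-HMAC-CHAP handshake is confirmed by the controller showing up.
$rpc bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0

# 4. Detach before the next keyid/dhgroup combination.
$rpc bdev_nvme_detach_controller nvme0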
01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:12.602 11:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:13.170 nvme0n1 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:13.170 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:13.738 nvme0n1 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:13.738 11:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:14.305 nvme0n1 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:14.305 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:14.870 nvme0n1 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:14.870 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:14.871 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:14.871 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:14.871 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:14.871 11:14:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:14.871 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:14.871 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:14.871 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:14.871 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:14.871 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:14.871 11:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:14.871 11:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:17:14.871 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:14.871 11:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.436 nvme0n1 01:17:15.436 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.436 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:15.436 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.437 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.695 nvme0n1 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.695 11:14:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.695 nvme0n1 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 01:17:15.695 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.696 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.955 nvme0n1 01:17:15.955 11:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.955 11:14:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:15.955 11:14:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:15.955 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.213 nvme0n1 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:16.213 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.214 nvme0n1 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.214 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.482 nvme0n1 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.482 
11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.482 11:14:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.482 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.740 nvme0n1 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:16.740 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
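For orientation, the nvmet_auth_set_key calls traced around this point provision the Linux nvmet target side of DH-HMAC-CHAP for one key index: they select the HMAC digest and FFDHE group and install the host key and, when one exists, the bidirectional controller key. Below is a minimal sketch of that step, assuming the usual nvmet configfs attributes under /sys/kernel/config/nvmet/hosts/<hostnqn>/ — xtrace does not show the redirection targets, so the exact paths are an assumption and not taken from this log.

# Sketch only: target-side provisioning mirroring nvmet_auth_set_key <digest> <dhgroup> <keyid>.
# The configfs attribute paths are assumed; the echoed values are the ones visible in the trace.
hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/${hostnqn}
key='DHHC-1:01:...'     # keys[keyid] from the test's key table (full values appear in the trace)
ckey='DHHC-1:01:...'    # ckeys[keyid]; empty for keyid 4 in this run

echo 'hmac(sha512)' > "${host_dir}/dhchap_hash"      # digest under test (sha384 or sha512 here)
echo ffdhe3072      > "${host_dir}/dhchap_dhgroup"   # FFDHE group under test
echo "$key"         > "${host_dir}/dhchap_key"       # host key for this keyid
[[ -n "$ckey" ]] && echo "$ckey" > "${host_dir}/dhchap_ctrl_key"   # only when bidirectional auth is exercised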
01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.741 nvme0n1 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.741 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.999 11:14:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.999 11:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.999 nvme0n1 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:16.999 
11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:16.999 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:17.257 nvme0n1 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:17.257 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:17.516 nvme0n1 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:17.516 11:14:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:17.516 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:17.775 nvme0n1 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
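One detail worth noting from the recurring host/auth.sh@58 lines: the controller key is passed conditionally through bash's ${var:+...} expansion inside an array, so --dhchap-ctrlr-key is only added when a ckey exists for that keyid (keyid 4 has none, which is why its trace shows an empty ckey= and a bare [[ -z '' ]]). A minimal standalone sketch of that pattern with hypothetical placeholder values (only the ckey=() line is taken from the trace):

  #!/usr/bin/env bash
  # If ckeys[keyid] is empty or unset, ckey becomes an empty array and the flag is
  # omitted entirely; otherwise it contributes "--dhchap-ctrlr-key ckey<N>".
  ckeys=([1]="some-registered-ckey" [4]="")   # hypothetical contents for illustration
  for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid $keyid -> extra args: ${ckey[*]:-<none>}"
  done
  # prints: keyid 1 -> extra args: --dhchap-ctrlr-key ckey1
  #         keyid 4 -> extra args: <none>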
01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:17.775 11:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:18.033 nvme0n1 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:18.033 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:18.291 nvme0n1 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:18.291 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:18.292 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:18.551 nvme0n1 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:18.551 11:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:18.552 11:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:17:18.552 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:18.552 11:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:18.810 nvme0n1 01:17:18.810 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:18.810 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:18.810 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:18.810 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:18.810 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
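Each attach above is preceded by the nvmf/common.sh@741-755 block, which is get_main_ns_ip resolving the address to dial: it maps the transport to the name of the variable that holds the address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and then echoes that variable's value, 10.0.0.1 in this run. A rough reconstruction of the helper inferred only from this trace (the actual guards in nvmf/common.sh, and the variable carrying the transport name, may differ; TEST_TRANSPORT is assumed here):

  get_main_ns_ip() {
      local ip
      # Map each transport to the name of the env var that holds its address.
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. "NVMF_INITIATOR_IP" for tcp
      [[ -z ${!ip} ]] && return 1            # indirect expansion: the value, 10.0.0.1 here
      echo "${!ip}"
  }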
01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:19.068 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:19.069 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:19.069 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:19.069 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:17:19.069 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:19.069 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:19.327 nvme0n1 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:19.327 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:19.586 nvme0n1 01:17:19.586 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:19.586 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:19.586 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:19.586 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:19.586 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:19.847 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:19.847 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:19.847 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:19.847 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:19.848 11:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:20.108 nvme0n1 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:20.108 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:20.677 nvme0n1 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:20.677 11:14:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ5YTM2ZDEyMDhjNDZiNDNiNmZlZGUwMTdlMmIyNmaU/HfT: 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: ]] 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQzYzE0ZTI0OGIwYmE0NGRhOGQxZTRmMTZlZDVkNGYzMGYzYjE4ZWJmZjE2NzM2MzhmOTE4MzZmNWJhZmQ3NrRxXzU=: 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:17:20.677 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:20.678 11:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:21.247 nvme0n1 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:21.247 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:21.817 nvme0n1 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:21.817 11:14:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQxYzVjNTFmYmIyYWRmNzgwYTgwMGEwNzQxNTBjMWNkMeMr: 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: ]] 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDcwODkxZTgzZmYxOTg5N2FhZjA1YjdkYmUyNzJiMzLNjeq/: 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:21.817 11:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:22.385 nvme0n1 01:17:22.385 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:22.385 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:22.385 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:22.385 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:22.385 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:22.385 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:22.385 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:22.385 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:22.385 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:22.385 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5MjEwMGMyMGUyYWY2MjgwNzljYTJiNDgyNDAyNzhmNDc0NTg4YzBiMjE3OTIxLV/1xg==: 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: ]] 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIwZDM2YzY2ZDlkZjJmMDc0OWFjOGJlZGQ2OGU1NWH3+AdQ: 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 01:17:22.386 11:14:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:22.386 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:22.952 nvme0n1 01:17:22.952 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:22.952 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:22.952 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:22.952 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:22.952 11:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:22.952 11:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:22.952 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:22.952 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:22.952 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:22.952 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:22.952 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:22.952 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 01:17:22.952 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 01:17:22.952 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:22.952 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:17:22.952 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:17:22.952 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGYzYjNmYmQwZTc2ZTcwMjhmMGY2NTE2Yzk2YzM3MzhiNWM5NGNlMDczNGE4ZWMxMWYzMzI1ZWM4ZDlkMTA3MTArsWg=: 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:17:22.953 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:23.522 nvme0n1 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlmODRiYzBlYWM3NWEzZTQ5MjFlNTI2ZDBmY2E3M2IyMDE1NzJhNDAzZGE5ZmUy2vpmWw==: 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: ]] 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmE3OWQ1Yjk5YmViMzMzMmM4NzBhNzZjZDU2Y2Q1M2I4ZjIyMjBlOWRlMDM0Zjg5qbnyQA==: 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:23.522 
11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:23.522 request: 01:17:23.522 { 01:17:23.522 "name": "nvme0", 01:17:23.522 "trtype": "tcp", 01:17:23.522 "traddr": "10.0.0.1", 01:17:23.522 "adrfam": "ipv4", 01:17:23.522 "trsvcid": "4420", 01:17:23.522 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:17:23.522 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:17:23.522 "prchk_reftag": false, 01:17:23.522 "prchk_guard": false, 01:17:23.522 "hdgst": false, 01:17:23.522 "ddgst": false, 01:17:23.522 "method": "bdev_nvme_attach_controller", 01:17:23.522 "req_id": 1 01:17:23.522 } 01:17:23.522 Got JSON-RPC error response 01:17:23.522 response: 01:17:23.522 { 01:17:23.522 "code": -5, 01:17:23.522 "message": "Input/output error" 01:17:23.522 } 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:23.522 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:17:23.523 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:23.782 request: 01:17:23.782 { 01:17:23.782 "name": "nvme0", 01:17:23.782 "trtype": "tcp", 01:17:23.782 "traddr": "10.0.0.1", 01:17:23.782 "adrfam": "ipv4", 01:17:23.782 "trsvcid": "4420", 01:17:23.782 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:17:23.782 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:17:23.782 "prchk_reftag": false, 01:17:23.782 "prchk_guard": false, 01:17:23.782 "hdgst": false, 01:17:23.782 "ddgst": false, 01:17:23.782 "dhchap_key": "key2", 01:17:23.782 "method": "bdev_nvme_attach_controller", 01:17:23.782 "req_id": 1 01:17:23.782 } 01:17:23.782 Got JSON-RPC error response 01:17:23.782 response: 01:17:23.782 { 01:17:23.782 "code": -5, 01:17:23.782 "message": "Input/output error" 01:17:23.782 } 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 01:17:23.782 11:14:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:23.782 request: 01:17:23.782 { 01:17:23.782 "name": "nvme0", 01:17:23.782 "trtype": "tcp", 01:17:23.782 "traddr": "10.0.0.1", 01:17:23.782 "adrfam": "ipv4", 
01:17:23.782 "trsvcid": "4420", 01:17:23.782 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:17:23.782 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:17:23.782 "prchk_reftag": false, 01:17:23.782 "prchk_guard": false, 01:17:23.782 "hdgst": false, 01:17:23.782 "ddgst": false, 01:17:23.782 "dhchap_key": "key1", 01:17:23.782 "dhchap_ctrlr_key": "ckey2", 01:17:23.782 "method": "bdev_nvme_attach_controller", 01:17:23.782 "req_id": 1 01:17:23.782 } 01:17:23.782 Got JSON-RPC error response 01:17:23.782 response: 01:17:23.782 { 01:17:23.782 "code": -5, 01:17:23.782 "message": "Input/output error" 01:17:23.782 } 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 01:17:23.782 11:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:17:23.782 rmmod nvme_tcp 01:17:23.782 rmmod nvme_fabrics 01:17:24.040 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:17:24.040 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 01:17:24.040 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 01:17:24.040 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 93282 ']' 01:17:24.040 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 93282 01:17:24.040 11:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 93282 ']' 01:17:24.040 11:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 93282 01:17:24.040 11:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 01:17:24.040 11:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:24.040 11:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93282 01:17:24.041 killing process with pid 93282 01:17:24.041 11:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:17:24.041 11:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:17:24.041 11:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93282' 01:17:24.041 11:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 93282 01:17:24.041 11:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 93282 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:17:24.298 
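The rejected attach attempts a few entries above are asserted simply by expecting the RPC to fail: once the target requires DH-HMAC-CHAP, a connect without keys (or with a mismatched key/ckey pairing) returns JSON-RPC error -5, "Input/output error". Outside the harness's NOT helper, the same expectation is a plain shell conditional; a hedged sketch:

  # Expect the unauthenticated connect to be refused by the target.
  if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
         -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo "unexpected: connect without DH-HMAC-CHAP keys succeeded" >&2
      exit 1
  fi
  # No controller should have been left behind.
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]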
11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:17:24.298 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 01:17:24.299 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 01:17:24.557 11:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:17:25.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:17:25.384 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:17:25.384 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:17:25.384 11:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.TOU /tmp/spdk.key-null.dRa /tmp/spdk.key-sha256.TeF /tmp/spdk.key-sha384.QvK /tmp/spdk.key-sha512.PT0 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 01:17:25.384 11:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:17:25.951 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:17:25.951 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:17:25.951 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:17:25.951 01:17:25.951 real 0m33.346s 01:17:25.951 user 0m30.506s 01:17:25.951 sys 0m5.133s 01:17:25.951 11:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 01:17:25.951 11:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:17:25.951 
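The kernel-target teardown logged above is plain configfs manipulation, and the order matters: the host and port links are removed before the namespace and subsystem directories, and the nvmet modules are unloaded last. A condensed sketch of the same sequence (the namespace enable path is an assumption; the log only shows the echo 0 itself):

  cfg=/sys/kernel/config/nvmet
  subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0
  # Drop the host whitelist link and the host definition.
  rm    "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
  rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"
  # Disable the namespace (path assumed), unlink the subsystem from the port,
  # then remove the directories bottom-up.
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f "$cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
  rmdir "$subsys/namespaces/1"
  rmdir "$cfg/ports/1"
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet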
************************************ 01:17:25.951 END TEST nvmf_auth_host 01:17:25.951 ************************************ 01:17:26.210 11:14:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:17:26.210 11:14:31 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 01:17:26.210 11:14:31 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 01:17:26.210 11:14:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:17:26.210 11:14:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:17:26.210 11:14:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:17:26.210 ************************************ 01:17:26.210 START TEST nvmf_digest 01:17:26.210 ************************************ 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 01:17:26.210 * Looking for test storage... 01:17:26.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:26.210 11:14:31 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:17:26.211 Cannot find device "nvmf_tgt_br" 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:17:26.211 Cannot find device "nvmf_tgt_br2" 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 01:17:26.211 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:17:26.470 Cannot find device "nvmf_tgt_br" 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 01:17:26.470 11:14:31 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:17:26.470 Cannot find device "nvmf_tgt_br2" 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:26.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:26.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:17:26.470 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:17:26.729 11:14:31 
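The topology being assembled here is standard iproute2: veth pairs whose target-facing ends live in the nvmf_tgt_ns_spdk namespace, with the root-namespace ends enslaved to a bridge, plus iptables rules that admit TCP/4420 and bridge-internal forwarding. A condensed sketch of the same setup (the second target interface, nvmf_tgt_if2 at 10.0.0.3, follows the same pattern and is omitted):

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_br ends stay in the root namespace and join the bridge,
  # the target-facing end moves into the namespace.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # Initiator 10.0.0.1 and target 10.0.0.2 share one /24.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the root-namespace ends together and open the NVMe/TCP port.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target reachability check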
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:17:26.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:17:26.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 01:17:26.729 01:17:26.729 --- 10.0.0.2 ping statistics --- 01:17:26.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:26.729 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:17:26.729 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:17:26.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 01:17:26.729 01:17:26.729 --- 10.0.0.3 ping statistics --- 01:17:26.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:26.729 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:17:26.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:17:26.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 01:17:26.729 01:17:26.729 --- 10.0.0.1 ping statistics --- 01:17:26.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:26.729 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:17:26.729 ************************************ 01:17:26.729 START TEST nvmf_digest_clean 01:17:26.729 ************************************ 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 01:17:26.729 11:14:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=94838 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 94838 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 94838 ']' 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:26.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:26.729 11:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:17:26.729 [2024-07-22 11:14:31.842261] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:17:26.729 [2024-07-22 11:14:31.842328] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:26.988 [2024-07-22 11:14:31.985291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:26.988 [2024-07-22 11:14:32.026995] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:17:26.988 [2024-07-22 11:14:32.027247] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:17:26.988 [2024-07-22 11:14:32.027380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:17:26.988 [2024-07-22 11:14:32.027391] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:17:26.988 [2024-07-22 11:14:32.027398] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
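The nvmf_veth_init sequence traced above reduces to a small amount of iproute2/iptables work: a network namespace for the target, three veth pairs whose host-side ends hang off one bridge, a shared 10.0.0.0/24 (initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3), and firewall rules that admit NVMe/TCP on port 4420. A condensed standalone sketch using the same names and addresses as the trace (the best-effort cleanup of leftover devices that precedes it is omitted):

ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: one for the initiator, two for the target.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target ends move into the namespace; the *_br ends stay on the host.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One host bridge ties the three host-side ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Admit NVMe/TCP (port 4420) and let traffic cross the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Same sanity pings as in the trace: both directions must answer.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target is then launched inside that namespace with ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc, and waitforlisten simply polls /var/tmp/spdk.sock until the application answers, which is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above.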
01:17:26.988 [2024-07-22 11:14:32.027428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:17:27.557 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:27.557 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:17:27.557 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:17:27.557 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 01:17:27.557 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:17:27.557 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:17:27.557 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 01:17:27.557 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 01:17:27.557 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 01:17:27.557 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:27.557 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:17:27.557 [2024-07-22 11:14:32.759826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:17:27.816 null0 01:17:27.816 [2024-07-22 11:14:32.801339] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:17:27.816 [2024-07-22 11:14:32.825375] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:17:27.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
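The rpc_cmd batch behind the NOTICE lines above is what turns the freshly started, --wait-for-rpc target into something the digest tests can connect to: the default socket implementation is switched to uring, subsystem initialization is completed, and a null bdev (null0) is created and exported over a TCP transport listening on 10.0.0.2:4420. The individual RPC arguments are not echoed in the trace, so the following is only a plausible reconstruction from the notices; the null bdev geometry, serial number and -a (allow any host) flag are assumptions, while the subsystem NQN nqn.2016-06.io.spdk:cnode1 is confirmed by the initiator's attach call further down:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # default socket: /var/tmp/spdk.sock

$rpc sock_set_default_impl -i uring               # matches the sock_subsystem_init override notice
$rpc framework_start_init                         # leave --wait-for-rpc mode
$rpc bdev_null_create null0 100 4096              # geometry assumed; only the name is shown
$rpc nvmf_create_transport -t tcp                 # "*** TCP Transport Init ***"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420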
01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94869 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94869 /var/tmp/bperf.sock 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 94869 ']' 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:27.816 11:14:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:17:27.816 [2024-07-22 11:14:32.883461] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:17:27.816 [2024-07-22 11:14:32.883722] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94869 ] 01:17:28.074 [2024-07-22 11:14:33.025753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:28.074 [2024-07-22 11:14:33.070531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:17:28.639 11:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:28.640 11:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:17:28.640 11:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:17:28.640 11:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:17:28.640 11:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:17:28.898 [2024-07-22 11:14:33.910565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:17:28.898 11:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:28.898 11:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:29.190 nvme0n1 01:17:29.190 11:14:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:17:29.190 11:14:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:17:29.190 Running I/O for 2 seconds... 
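On the initiator side every pass of the clean test follows the same pattern, all of which is visible verbatim in the trace: bdevperf is started against its own RPC socket (/var/tmp/bperf.sock) with -z and --wait-for-rpc so it does nothing until told to, its framework is initialized over that socket, an NVMe-oF controller is attached with only the TCP data digest enabled (--ddgst, no --hdgst), and the workload is finally kicked off through bdevperf.py. Condensed, with the first pass's parameters:

spdk=/home/vagrant/spdk_repo/spdk

$spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!

# Finish bdevperf initialization, then attach the subsystem exported above
# with the TCP data digest turned on.
$spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
$spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# -z makes bdevperf wait for this RPC instead of starting I/O on its own;
# this is the "Running I/O for 2 seconds..." line above.
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests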
01:17:31.722 01:17:31.722 Latency(us) 01:17:31.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:31.722 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:17:31.722 nvme0n1 : 2.00 19207.42 75.03 0.00 0.00 6659.81 6316.72 16844.59 01:17:31.722 =================================================================================================================== 01:17:31.722 Total : 19207.42 75.03 0.00 0.00 6659.81 6316.72 16844.59 01:17:31.722 0 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:17:31.722 | select(.opcode=="crc32c") 01:17:31.722 | "\(.module_name) \(.executed)"' 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94869 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 94869 ']' 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 94869 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94869 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:17:31.722 killing process with pid 94869 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94869' 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 94869 01:17:31.722 Received shutdown signal, test time was about 2.000000 seconds 01:17:31.722 01:17:31.722 Latency(us) 01:17:31.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:31.722 =================================================================================================================== 01:17:31.722 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 94869 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:17:31.722 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:17:31.723 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:17:31.723 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94926 01:17:31.723 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94926 /var/tmp/bperf.sock 01:17:31.723 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 94926 ']' 01:17:31.723 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:17:31.723 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:31.723 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:17:31.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:17:31.723 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:31.723 11:14:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:17:31.723 [2024-07-22 11:14:36.801921] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:17:31.723 [2024-07-22 11:14:36.802198] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 01:17:31.723 Zero copy mechanism will not be used. 
01:17:31.723 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94926 ] 01:17:31.980 [2024-07-22 11:14:36.945659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:31.980 [2024-07-22 11:14:36.994796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:17:32.544 11:14:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:32.544 11:14:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:17:32.544 11:14:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:17:32.544 11:14:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:17:32.544 11:14:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:17:32.801 [2024-07-22 11:14:37.943901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:17:32.801 11:14:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:32.801 11:14:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:33.058 nvme0n1 01:17:33.317 11:14:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:17:33.317 11:14:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:17:33.317 I/O size of 131072 is greater than zero copy threshold (65536). 01:17:33.317 Zero copy mechanism will not be used. 01:17:33.317 Running I/O for 2 seconds... 
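After each 2-second run the test verifies that the digests were really computed by the expected accel module: it pulls bdevperf's accel statistics, filters them with the jq expression shown in the trace, and, since scan_dsa=false here, expects a non-zero crc32c count from the software module. Roughly:

read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

(( acc_executed > 0 ))          # crc32c operations actually ran through the accel framework
[[ $acc_module == software ]]   # DSA was not requested, so the software module must report them

The "I/O size of 131072 is greater than zero copy threshold (65536)" notices around the 128 KiB passes are informational: bdevperf skips its zero-copy path for buffers that large, which is unrelated to the digest verification itself.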
01:17:35.215 01:17:35.215 Latency(us) 01:17:35.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:35.215 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 01:17:35.215 nvme0n1 : 2.00 5814.91 726.86 0.00 0.00 2748.82 2553.01 7948.54 01:17:35.215 =================================================================================================================== 01:17:35.215 Total : 5814.91 726.86 0.00 0.00 2748.82 2553.01 7948.54 01:17:35.215 0 01:17:35.215 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:17:35.215 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:17:35.215 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:17:35.215 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:17:35.215 | select(.opcode=="crc32c") 01:17:35.215 | "\(.module_name) \(.executed)"' 01:17:35.215 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94926 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 94926 ']' 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 94926 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94926 01:17:35.474 killing process with pid 94926 01:17:35.474 Received shutdown signal, test time was about 2.000000 seconds 01:17:35.474 01:17:35.474 Latency(us) 01:17:35.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:35.474 =================================================================================================================== 01:17:35.474 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94926' 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 94926 01:17:35.474 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 94926 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94981 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94981 /var/tmp/bperf.sock 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 94981 ']' 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:17:35.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:35.733 11:14:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:17:35.733 [2024-07-22 11:14:40.856278] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:17:35.733 [2024-07-22 11:14:40.856561] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94981 ] 01:17:35.993 [2024-07-22 11:14:40.999889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:35.993 [2024-07-22 11:14:41.049112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:17:36.560 11:14:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:36.560 11:14:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:17:36.560 11:14:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:17:36.560 11:14:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:17:36.560 11:14:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:17:36.819 [2024-07-22 11:14:41.949542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:17:36.819 11:14:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:36.819 11:14:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:37.076 nvme0n1 01:17:37.076 11:14:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:17:37.076 11:14:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:17:37.333 Running I/O for 2 seconds... 
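The clean test sweeps the same per-pass sequence over four workload/block-size/queue-depth combinations, which is why the start/attach/perform_tests/accel-check pattern keeps repeating in the trace. As called in host/digest.sh (the trailing false is the scan_dsa flag, i.e. no DSA offload):

run_bperf randread  4096   128 false   # 4 KiB random reads,    queue depth 128
run_bperf randread  131072 16  false   # 128 KiB random reads,  queue depth 16
run_bperf randwrite 4096   128 false   # 4 KiB random writes,   queue depth 128
run_bperf randwrite 131072 16  false   # 128 KiB random writes, queue depth 16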
01:17:39.295 01:17:39.295 Latency(us) 01:17:39.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:39.295 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:39.295 nvme0n1 : 2.00 20285.91 79.24 0.00 0.00 6305.08 2013.46 11843.86 01:17:39.295 =================================================================================================================== 01:17:39.295 Total : 20285.91 79.24 0.00 0.00 6305.08 2013.46 11843.86 01:17:39.295 0 01:17:39.295 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:17:39.295 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:17:39.295 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:17:39.295 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:17:39.295 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:17:39.295 | select(.opcode=="crc32c") 01:17:39.295 | "\(.module_name) \(.executed)"' 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94981 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 94981 ']' 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 94981 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94981 01:17:39.553 killing process with pid 94981 01:17:39.553 Received shutdown signal, test time was about 2.000000 seconds 01:17:39.553 01:17:39.553 Latency(us) 01:17:39.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:39.553 =================================================================================================================== 01:17:39.553 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94981' 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 94981 01:17:39.553 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 94981 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95041 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95041 /var/tmp/bperf.sock 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 95041 ']' 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:17:39.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:39.812 11:14:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:17:39.812 [2024-07-22 11:14:44.888561] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:17:39.812 [2024-07-22 11:14:44.888789] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 01:17:39.812 Zero copy mechanism will not be used. 
01:17:39.812 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95041 ] 01:17:39.812 [2024-07-22 11:14:45.015530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:40.070 [2024-07-22 11:14:45.059745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:17:40.634 11:14:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:40.634 11:14:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:17:40.634 11:14:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:17:40.634 11:14:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:17:40.634 11:14:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:17:40.903 [2024-07-22 11:14:45.943655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:17:40.903 11:14:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:40.903 11:14:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:41.162 nvme0n1 01:17:41.162 11:14:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:17:41.162 11:14:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:17:41.162 I/O size of 131072 is greater than zero copy threshold (65536). 01:17:41.162 Zero copy mechanism will not be used. 01:17:41.162 Running I/O for 2 seconds... 
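Each pass ends by tearing its bdevperf instance down, and once the fourth pass below completes, the nvmf_tgt started at the beginning (pid 94838) is killed as well. The killprocess helper traced each time is, in essence, a liveness check followed by kill and wait; a minimal sketch of that pattern (the real helper also inspects the process name to special-case anything running under sudo):

killprocess() {
    local pid=$1
    kill -0 "$pid"                      # bail out early if the process is already gone
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                 # reap it; the exit status is not checked here
}

killprocess "$bperfpid"   # per-pass bdevperf instances (94869, 94926, 94981, 95041 above)
killprocess "$nvmfpid"    # the nvmf_tgt itself (94838), after the last pass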
01:17:43.695 01:17:43.695 Latency(us) 01:17:43.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:43.695 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 01:17:43.695 nvme0n1 : 2.00 6641.25 830.16 0.00 0.00 2405.51 1256.76 3790.03 01:17:43.695 =================================================================================================================== 01:17:43.695 Total : 6641.25 830.16 0.00 0.00 2405.51 1256.76 3790.03 01:17:43.695 0 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:17:43.695 | select(.opcode=="crc32c") 01:17:43.695 | "\(.module_name) \(.executed)"' 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95041 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 95041 ']' 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 95041 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95041 01:17:43.695 killing process with pid 95041 01:17:43.695 Received shutdown signal, test time was about 2.000000 seconds 01:17:43.695 01:17:43.695 Latency(us) 01:17:43.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:43.695 =================================================================================================================== 01:17:43.695 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95041' 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 95041 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 95041 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94838 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 94838 ']' 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 94838 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94838 01:17:43.695 killing process with pid 94838 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94838' 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 94838 01:17:43.695 11:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 94838 01:17:43.953 01:17:43.953 real 0m17.339s 01:17:43.953 user 0m31.114s 01:17:43.953 sys 0m5.965s 01:17:43.953 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 01:17:43.953 ************************************ 01:17:43.953 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:17:43.953 END TEST nvmf_digest_clean 01:17:43.953 ************************************ 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:17:44.210 ************************************ 01:17:44.210 START TEST nvmf_digest_error 01:17:44.210 ************************************ 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=95123 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 95123 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 95123 ']' 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:44.210 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 01:17:44.211 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:44.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:44.211 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:44.211 11:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:44.211 [2024-07-22 11:14:49.261588] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:17:44.211 [2024-07-22 11:14:49.261804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:44.211 [2024-07-22 11:14:49.404555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:44.468 [2024-07-22 11:14:49.469947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:17:44.468 [2024-07-22 11:14:49.470275] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:17:44.468 [2024-07-22 11:14:49.470471] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:17:44.468 [2024-07-22 11:14:49.470527] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:17:44.468 [2024-07-22 11:14:49.470604] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:17:44.468 [2024-07-22 11:14:49.470668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:17:45.034 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:45.034 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 01:17:45.034 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:17:45.034 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 01:17:45.034 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:45.034 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:17:45.034 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 01:17:45.034 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:45.034 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:45.034 [2024-07-22 11:14:50.158198] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 01:17:45.034 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:45.034 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 01:17:45.034 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 01:17:45.034 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:45.034 11:14:50 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:45.292 [2024-07-22 11:14:50.245639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:17:45.292 null0 01:17:45.292 [2024-07-22 11:14:50.299394] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:17:45.292 [2024-07-22 11:14:50.323504] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95155 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95155 /var/tmp/bperf.sock 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 95155 ']' 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:17:45.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:45.292 11:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:45.292 [2024-07-22 11:14:50.378424] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:17:45.292 [2024-07-22 11:14:50.378663] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95155 ] 01:17:45.550 [2024-07-22 11:14:50.514148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:45.550 [2024-07-22 11:14:50.563203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:17:45.550 [2024-07-22 11:14:50.605829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:17:46.113 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:46.113 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 01:17:46.113 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:17:46.113 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:17:46.382 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:17:46.382 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:46.382 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:46.382 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:46.382 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:46.382 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:46.640 nvme0n1 01:17:46.640 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:17:46.640 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:46.640 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:46.640 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:46.640 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:17:46.640 11:14:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:17:46.640 Running I/O for 2 seconds... 
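The nvmf_digest_error test that starts above reuses the same topology but adds error injection on the target: before initialization the crc32c opcode is assigned to the accel error module (accel_assign_opc -o crc32c -m error), and once the initiator has attached, that module is told to start corrupting crc32c results (-t corrupt -i 256, arguments taken verbatim from the trace). On the initiator, bdevperf is launched without --wait-for-rpc this time and its NVMe bdev options are adjusted (--nvme-error-stat --bdev-retry-count -1) before attaching with --ddgst. The corrupted digests then show up at the initiator as the stream of nvme_tcp "data digest error" detections and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that follows. The RPC sequence, in trace order:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf() { "$rpc" -s /var/tmp/bperf.sock "$@"; }       # bdevperf's RPC socket

$rpc accel_assign_opc -o crc32c -m error              # target: crc32c handled by the error module
# ... target config and bdevperf (pid 95155) start-up as in the clean test ...
bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$rpc accel_error_inject_error -o crc32c -t disable    # target: injection off while connecting
bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256   # target: now corrupt crc32c results
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests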
01:17:46.917 [2024-07-22 11:14:51.867495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:51.867580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:51.867595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:51.881030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:51.881078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:51.881090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:51.894514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:51.894566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:51.894578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:51.907931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:51.907965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:51.907976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:51.921317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:51.921352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:51.921363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:51.935016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:51.935051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:51.935063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:51.948252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:51.948287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:51.948300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:51.961596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:51.961631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:51.961643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:51.975016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:51.975050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:51.975061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:51.988597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:51.988644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:51.988657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:52.002068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:52.002108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:52.002120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:52.015481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:52.015522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:52.015534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:52.029032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:52.029070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:52.029081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:52.042495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:52.042538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:52.042549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:52.055988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:52.056023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:52.056034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:52.069349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.917 [2024-07-22 11:14:52.069384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.917 [2024-07-22 11:14:52.069396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.917 [2024-07-22 11:14:52.082835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.918 [2024-07-22 11:14:52.082882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.918 [2024-07-22 11:14:52.082893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.918 [2024-07-22 11:14:52.096179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.918 [2024-07-22 11:14:52.096217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.918 [2024-07-22 11:14:52.096228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:46.918 [2024-07-22 11:14:52.109595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:46.918 [2024-07-22 11:14:52.109632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:46.918 [2024-07-22 11:14:52.109660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.177 [2024-07-22 11:14:52.123346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.177 [2024-07-22 11:14:52.123390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.177 [2024-07-22 11:14:52.123403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.177 [2024-07-22 11:14:52.137364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.177 [2024-07-22 11:14:52.137404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.177 [2024-07-22 11:14:52.137416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.177 [2024-07-22 11:14:52.151223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.177 [2024-07-22 11:14:52.151261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.177 [2024-07-22 11:14:52.151272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.164472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.164508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.164520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.177794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.177829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.177840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.191110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.191147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.191158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.204521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.204557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.204569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.217771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.217807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.217819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.231207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.231241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 
[2024-07-22 11:14:52.231252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.244507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.244542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.244553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.258169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.258206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.258218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.272037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.272079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.272092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.286110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.286155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.286171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.299571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.299610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.299622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.312957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.312997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.313011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.326458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.326497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10815 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.326510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.339926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.339964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.339975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.353407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.353451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.353463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.366975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.367015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.367032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.178 [2024-07-22 11:14:52.380394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.178 [2024-07-22 11:14:52.380433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.178 [2024-07-22 11:14:52.380445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.393958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.393996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.394008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.407383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.407423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.407435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.420803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.420839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:6330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.420863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.434228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.434264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.434276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.447420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.447453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.447464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.460757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.460790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.460802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.474220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.474265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.474277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.487609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.487645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.487656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.501085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.501121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.501133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.514667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.514705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.514716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.528096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.528132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.528143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.541383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.541420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.541431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.554844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.554894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.554906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.568178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.568216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.568228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.581452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.581495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.581507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.594844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.594887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.594898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.608230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 
01:17:47.438 [2024-07-22 11:14:52.608267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.608279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.621641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.621677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.621689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.438 [2024-07-22 11:14:52.635014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.438 [2024-07-22 11:14:52.635048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.438 [2024-07-22 11:14:52.635059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.698 [2024-07-22 11:14:52.648373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.698 [2024-07-22 11:14:52.648405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.698 [2024-07-22 11:14:52.648417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.698 [2024-07-22 11:14:52.661682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.698 [2024-07-22 11:14:52.661716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.698 [2024-07-22 11:14:52.661727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.698 [2024-07-22 11:14:52.675140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.698 [2024-07-22 11:14:52.675171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.698 [2024-07-22 11:14:52.675182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.698 [2024-07-22 11:14:52.688451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.698 [2024-07-22 11:14:52.688484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.698 [2024-07-22 11:14:52.688495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.698 [2024-07-22 11:14:52.701837] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.698 [2024-07-22 11:14:52.701878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.698 [2024-07-22 11:14:52.701890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.698 [2024-07-22 11:14:52.721040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.698 [2024-07-22 11:14:52.721078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.698 [2024-07-22 11:14:52.721090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.698 [2024-07-22 11:14:52.734488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.698 [2024-07-22 11:14:52.734536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.698 [2024-07-22 11:14:52.734549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.698 [2024-07-22 11:14:52.747709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.698 [2024-07-22 11:14:52.747745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.698 [2024-07-22 11:14:52.747756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.698 [2024-07-22 11:14:52.761115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.698 [2024-07-22 11:14:52.761149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.698 [2024-07-22 11:14:52.761160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.698 [2024-07-22 11:14:52.774626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.698 [2024-07-22 11:14:52.774662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.698 [2024-07-22 11:14:52.774674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.698 [2024-07-22 11:14:52.787989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.698 [2024-07-22 11:14:52.788022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.698 [2024-07-22 11:14:52.788034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 01:17:47.698 [2024-07-22 11:14:52.801447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.698 [2024-07-22 11:14:52.801481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.698 [2024-07-22 11:14:52.801492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.698 [2024-07-22 11:14:52.814861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.698 [2024-07-22 11:14:52.814909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.698 [2024-07-22 11:14:52.814921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.698 [2024-07-22 11:14:52.828236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.698 [2024-07-22 11:14:52.828276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.699 [2024-07-22 11:14:52.828287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.699 [2024-07-22 11:14:52.841517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.699 [2024-07-22 11:14:52.841556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.699 [2024-07-22 11:14:52.841568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.699 [2024-07-22 11:14:52.854977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.699 [2024-07-22 11:14:52.855014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.699 [2024-07-22 11:14:52.855026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.699 [2024-07-22 11:14:52.868275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.699 [2024-07-22 11:14:52.868310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.699 [2024-07-22 11:14:52.868322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.699 [2024-07-22 11:14:52.881577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.699 [2024-07-22 11:14:52.881641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.699 [2024-07-22 11:14:52.881653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.699 [2024-07-22 11:14:52.895137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.699 [2024-07-22 11:14:52.895171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.699 [2024-07-22 11:14:52.895183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.956 [2024-07-22 11:14:52.908465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.956 [2024-07-22 11:14:52.908501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.956 [2024-07-22 11:14:52.908512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.956 [2024-07-22 11:14:52.921948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.956 [2024-07-22 11:14:52.922007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.956 [2024-07-22 11:14:52.922018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.956 [2024-07-22 11:14:52.935276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.956 [2024-07-22 11:14:52.935316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.956 [2024-07-22 11:14:52.935328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.956 [2024-07-22 11:14:52.948534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.956 [2024-07-22 11:14:52.948569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.956 [2024-07-22 11:14:52.948581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.956 [2024-07-22 11:14:52.961989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.956 [2024-07-22 11:14:52.962025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.956 [2024-07-22 11:14:52.962036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.956 [2024-07-22 11:14:52.975834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.956 [2024-07-22 11:14:52.975883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.956 [2024-07-22 11:14:52.975895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.956 [2024-07-22 11:14:52.989388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.956 [2024-07-22 11:14:52.989432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.956 [2024-07-22 11:14:52.989451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.956 [2024-07-22 11:14:53.002774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.956 [2024-07-22 11:14:53.002815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.956 [2024-07-22 11:14:53.002827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.956 [2024-07-22 11:14:53.016245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.956 [2024-07-22 11:14:53.016285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.956 [2024-07-22 11:14:53.016296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.956 [2024-07-22 11:14:53.029675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.956 [2024-07-22 11:14:53.029718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.956 [2024-07-22 11:14:53.029730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.956 [2024-07-22 11:14:53.043118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.956 [2024-07-22 11:14:53.043152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.957 [2024-07-22 11:14:53.043164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.957 [2024-07-22 11:14:53.056472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.957 [2024-07-22 11:14:53.056506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.957 [2024-07-22 11:14:53.056518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.957 [2024-07-22 11:14:53.069801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.957 [2024-07-22 11:14:53.069871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.957 
[2024-07-22 11:14:53.069883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.957 [2024-07-22 11:14:53.083271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.957 [2024-07-22 11:14:53.083308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.957 [2024-07-22 11:14:53.083319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.957 [2024-07-22 11:14:53.096527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.957 [2024-07-22 11:14:53.096563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.957 [2024-07-22 11:14:53.096575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.957 [2024-07-22 11:14:53.109993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.957 [2024-07-22 11:14:53.110027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.957 [2024-07-22 11:14:53.110040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.957 [2024-07-22 11:14:53.123521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.957 [2024-07-22 11:14:53.123557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.957 [2024-07-22 11:14:53.123569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.957 [2024-07-22 11:14:53.137145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.957 [2024-07-22 11:14:53.137180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.957 [2024-07-22 11:14:53.137192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:47.957 [2024-07-22 11:14:53.151305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:47.957 [2024-07-22 11:14:53.151372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:47.957 [2024-07-22 11:14:53.151387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.225 [2024-07-22 11:14:53.165273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.165319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21268 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.165331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.178760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.178800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.178813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.192153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.192189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.192201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.205648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.205689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.205701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.219118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.219160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.219172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.232679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.232730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.232742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.246077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.246124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.246148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.259605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.259647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:3052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.259659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.272917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.272958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.272970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.286285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.286324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.286337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.299528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.299563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.299575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.312889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.312924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.312935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.326165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.326199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.326210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.339609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.339647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.339658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.353076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.353111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.353123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.366443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.366478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.366489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.379920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.379954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.379966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.393311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.393344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.393355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.406889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.406928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.406940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.226 [2024-07-22 11:14:53.420537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.226 [2024-07-22 11:14:53.420575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.226 [2024-07-22 11:14:53.420586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.485 [2024-07-22 11:14:53.433961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.485 [2024-07-22 11:14:53.433996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.485 [2024-07-22 11:14:53.434008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.485 [2024-07-22 11:14:53.447498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.485 
[2024-07-22 11:14:53.447537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.485 [2024-07-22 11:14:53.447549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.485 [2024-07-22 11:14:53.460755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.485 [2024-07-22 11:14:53.460795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.485 [2024-07-22 11:14:53.460807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.485 [2024-07-22 11:14:53.474159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.485 [2024-07-22 11:14:53.474201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.485 [2024-07-22 11:14:53.474220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.485 [2024-07-22 11:14:53.487603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.485 [2024-07-22 11:14:53.487648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.485 [2024-07-22 11:14:53.487660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.485 [2024-07-22 11:14:53.500899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.485 [2024-07-22 11:14:53.500940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.485 [2024-07-22 11:14:53.500953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.485 [2024-07-22 11:14:53.514506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.485 [2024-07-22 11:14:53.514546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.485 [2024-07-22 11:14:53.514558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.485 [2024-07-22 11:14:53.528054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.485 [2024-07-22 11:14:53.528092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.485 [2024-07-22 11:14:53.528104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.485 [2024-07-22 11:14:53.541512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xa16690) 01:17:48.485 [2024-07-22 11:14:53.541575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.485 [2024-07-22 11:14:53.541587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.485 [2024-07-22 11:14:53.554917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.485 [2024-07-22 11:14:53.554952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.485 [2024-07-22 11:14:53.554964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.485 [2024-07-22 11:14:53.568350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.486 [2024-07-22 11:14:53.568383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.486 [2024-07-22 11:14:53.568395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.486 [2024-07-22 11:14:53.587474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.486 [2024-07-22 11:14:53.587513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.486 [2024-07-22 11:14:53.587525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.486 [2024-07-22 11:14:53.600935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.486 [2024-07-22 11:14:53.600970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.486 [2024-07-22 11:14:53.600981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.486 [2024-07-22 11:14:53.614328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.486 [2024-07-22 11:14:53.614365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.486 [2024-07-22 11:14:53.614376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.486 [2024-07-22 11:14:53.627845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.486 [2024-07-22 11:14:53.627890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.486 [2024-07-22 11:14:53.627901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.486 [2024-07-22 11:14:53.641209] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.486 [2024-07-22 11:14:53.641244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.486 [2024-07-22 11:14:53.641255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.486 [2024-07-22 11:14:53.654715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.486 [2024-07-22 11:14:53.654754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.486 [2024-07-22 11:14:53.654765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.486 [2024-07-22 11:14:53.668100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.486 [2024-07-22 11:14:53.668136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.486 [2024-07-22 11:14:53.668147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.486 [2024-07-22 11:14:53.681423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.486 [2024-07-22 11:14:53.681474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.486 [2024-07-22 11:14:53.681488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.745 [2024-07-22 11:14:53.694769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.745 [2024-07-22 11:14:53.694807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.745 [2024-07-22 11:14:53.694818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.745 [2024-07-22 11:14:53.708116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.745 [2024-07-22 11:14:53.708151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.745 [2024-07-22 11:14:53.708162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.745 [2024-07-22 11:14:53.721547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.745 [2024-07-22 11:14:53.721582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.745 [2024-07-22 11:14:53.721593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
01:17:48.745 [2024-07-22 11:14:53.734972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.745 [2024-07-22 11:14:53.735010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.745 [2024-07-22 11:14:53.735022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.745 [2024-07-22 11:14:53.748331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.745 [2024-07-22 11:14:53.748370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.745 [2024-07-22 11:14:53.748383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.745 [2024-07-22 11:14:53.761589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.745 [2024-07-22 11:14:53.761626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.745 [2024-07-22 11:14:53.761638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.745 [2024-07-22 11:14:53.774963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.745 [2024-07-22 11:14:53.774997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.745 [2024-07-22 11:14:53.775010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.745 [2024-07-22 11:14:53.788252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.745 [2024-07-22 11:14:53.788287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.745 [2024-07-22 11:14:53.788298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.745 [2024-07-22 11:14:53.801651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.745 [2024-07-22 11:14:53.801685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.745 [2024-07-22 11:14:53.801697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.745 [2024-07-22 11:14:53.814919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.745 [2024-07-22 11:14:53.814952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.745 [2024-07-22 11:14:53.814963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.745 [2024-07-22 11:14:53.828218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.745 [2024-07-22 11:14:53.828252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.745 [2024-07-22 11:14:53.828263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.745 [2024-07-22 11:14:53.841331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa16690) 01:17:48.745 [2024-07-22 11:14:53.841368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:48.745 [2024-07-22 11:14:53.841379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:48.745 01:17:48.745 Latency(us) 01:17:48.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:48.745 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:17:48.745 nvme0n1 : 2.00 18760.85 73.28 0.00 0.00 6818.89 6290.40 26003.84 01:17:48.745 =================================================================================================================== 01:17:48.745 Total : 18760.85 73.28 0.00 0.00 6818.89 6290.40 26003.84 01:17:48.745 0 01:17:48.745 11:14:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:17:48.745 11:14:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:17:48.745 11:14:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:17:48.745 11:14:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:17:48.745 | .driver_specific 01:17:48.745 | .nvme_error 01:17:48.745 | .status_code 01:17:48.745 | .command_transient_transport_error' 01:17:49.004 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 )) 01:17:49.004 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95155 01:17:49.004 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 95155 ']' 01:17:49.004 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 95155 01:17:49.004 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 01:17:49.004 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:49.004 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95155 01:17:49.004 killing process with pid 95155 01:17:49.004 Received shutdown signal, test time was about 2.000000 seconds 01:17:49.004 01:17:49.004 Latency(us) 01:17:49.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:49.004 =================================================================================================================== 01:17:49.004 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:17:49.004 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 
-- # process_name=reactor_1 01:17:49.004 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:17:49.004 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95155' 01:17:49.004 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 95155 01:17:49.004 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 95155 01:17:49.262 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 01:17:49.262 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:17:49.262 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 01:17:49.262 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 01:17:49.262 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 01:17:49.262 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95212 01:17:49.262 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95212 /var/tmp/bperf.sock 01:17:49.262 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 01:17:49.262 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 95212 ']' 01:17:49.262 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:17:49.262 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:49.262 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:17:49.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:17:49.262 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:49.262 11:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:49.262 [2024-07-22 11:14:54.459218] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:17:49.262 [2024-07-22 11:14:54.459503] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 01:17:49.262 Zero copy mechanism will not be used. 
01:17:49.262 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95212 ] 01:17:49.520 [2024-07-22 11:14:54.588218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:49.520 [2024-07-22 11:14:54.660402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:17:49.779 [2024-07-22 11:14:54.732639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:17:50.347 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:50.347 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 01:17:50.347 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:17:50.347 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:17:50.347 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:17:50.347 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:50.347 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:50.347 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:50.347 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:50.347 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:50.606 nvme0n1 01:17:50.606 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 01:17:50.606 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:50.606 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:50.606 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:50.606 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:17:50.606 11:14:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:17:50.866 I/O size of 131072 is greater than zero copy threshold (65536). 01:17:50.866 Zero copy mechanism will not be used. 01:17:50.866 Running I/O for 2 seconds... 
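The shell trace above interleaves the digest.sh steps with SPDK's own log output. As a reading aid, the same host-side sequence is condensed below into a stand-alone sketch: every path, flag, the 10.0.0.2:4420/cnode1 target and the jq filter are copied from the trace, while the socket behind the two rpc_cmd accel_error_inject_error calls is not visible in the log and is assumed here to be rpc.py's default.

#!/usr/bin/env bash
# Condensed sketch of the digest-error case traced above (not the digest.sh source).
SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Host side: bdevperf acts as the NVMe/TCP initiator; -z makes it wait for the
# perform_tests RPC instead of starting I/O immediately (flags copied from the trace).
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Keep per-controller NVMe error statistics and retry failed commands indefinitely (-1),
# so injected digest failures are counted rather than surfaced as I/O errors.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# crc32c error injection: first disabled, then re-armed as 'corrupt' with -i 32 once the
# controller is attached with data digest enabled (--ddgst). The rpc_cmd socket is an
# assumption here (rpc.py default); the trace does not show which socket it resolves to.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the 2-second workload, then read back how many commands completed with a
# transient transport error (the same jq filter used by get_transient_errcount earlier).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

kill "$bperfpid"
wait "$bperfpid"

Because retries are unlimited, each injected digest failure shows up only as a COMMAND TRANSIENT TRANSPORT ERROR completion and a bump in that counter, which is what the earlier (( 147 > 0 )) check in the trace asserted for the previous case.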
01:17:50.866 [2024-07-22 11:14:55.881828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.866 [2024-07-22 11:14:55.881908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.866 [2024-07-22 11:14:55.881923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:50.866 [2024-07-22 11:14:55.886384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.866 [2024-07-22 11:14:55.886425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.866 [2024-07-22 11:14:55.886438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:50.866 [2024-07-22 11:14:55.890929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.866 [2024-07-22 11:14:55.890959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.866 [2024-07-22 11:14:55.890970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:50.866 [2024-07-22 11:14:55.895400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.866 [2024-07-22 11:14:55.895432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.866 [2024-07-22 11:14:55.895444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:50.866 [2024-07-22 11:14:55.899888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.866 [2024-07-22 11:14:55.899917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.866 [2024-07-22 11:14:55.899929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:50.866 [2024-07-22 11:14:55.904271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.866 [2024-07-22 11:14:55.904302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.866 [2024-07-22 11:14:55.904313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:50.866 [2024-07-22 11:14:55.908762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.866 [2024-07-22 11:14:55.908793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.866 [2024-07-22 11:14:55.908804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:50.866 [2024-07-22 11:14:55.913250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.866 [2024-07-22 11:14:55.913282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.866 [2024-07-22 11:14:55.913293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:50.866 [2024-07-22 11:14:55.917772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.866 [2024-07-22 11:14:55.917820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.866 [2024-07-22 11:14:55.917832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:50.866 [2024-07-22 11:14:55.922350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.866 [2024-07-22 11:14:55.922383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.866 [2024-07-22 11:14:55.922394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:50.866 [2024-07-22 11:14:55.926837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.866 [2024-07-22 11:14:55.926878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.866 [2024-07-22 11:14:55.926890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:50.866 [2024-07-22 11:14:55.931336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.866 [2024-07-22 11:14:55.931367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.866 [2024-07-22 11:14:55.931378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:50.866 [2024-07-22 11:14:55.935796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.935828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:55.935839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:55.940287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.940319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:55.940330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:55.944812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.944843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:55.944866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:55.949215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.949246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:55.949256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:55.953737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.953769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:55.953780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:55.958378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.958410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:55.958421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:55.962833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.962875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:55.962886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:55.967371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.967402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:55.967413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:55.971835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.971873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:17:50.867 [2024-07-22 11:14:55.971885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:55.976307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.976338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:55.976349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:55.980741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.980773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:55.980783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:55.985302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.985335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:55.985346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:55.989828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.989875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:55.989887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:55.994335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.994368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:55.994379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:55.998771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:55.998802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:55.998814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.003202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:56.003234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:56.003245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.007633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:56.007664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:56.007675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.012139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:56.012171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:56.012182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.016575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:56.016606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:56.016617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.021012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:56.021042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:56.021053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.025422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:56.025461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:56.025472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.029817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:56.029861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:56.029872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.034260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:56.034291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:56.034302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.038771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:56.038803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:56.038813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.043255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:56.043286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:56.043297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.047696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:56.047727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:56.047738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.052152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:56.052183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:56.052194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.056555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:56.056586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:56.056597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.061001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.867 [2024-07-22 11:14:56.061030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.867 [2024-07-22 11:14:56.061041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:50.867 [2024-07-22 11:14:56.065339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 
01:17:50.868 [2024-07-22 11:14:56.065370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.868 [2024-07-22 11:14:56.065381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:50.868 [2024-07-22 11:14:56.069937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:50.868 [2024-07-22 11:14:56.069967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:50.868 [2024-07-22 11:14:56.069978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.074459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.074491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.074503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.078907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.078936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.078947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.083330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.083361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.083372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.087684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.087715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.087726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.092159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.092191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.092201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.096596] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.096628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.096640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.101113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.101145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.101156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.105540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.105571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.105582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.110019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.110049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.110060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.114367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.114400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.114411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.118797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.118829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.118840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.123277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.123308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.123319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 01:17:51.128 [2024-07-22 11:14:56.127768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.127799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.127810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.132294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.132325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.132336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.136769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.136799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.136810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.141144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.141175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.141186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.145735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.145768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.145779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.150216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.150248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.150259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.154684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.154715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.154725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.159113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.159144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.159155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.163521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.163552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.163563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.128 [2024-07-22 11:14:56.167981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.128 [2024-07-22 11:14:56.168011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.128 [2024-07-22 11:14:56.168021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.172501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.172533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.172543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.177027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.177056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.177067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.181485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.181516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.181543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.186044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.186074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.186085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.190518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.190551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.190561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.194961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.194990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.195001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.199372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.199403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.199414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.203841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.203879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.203890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.208292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.208325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.208336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.212783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.212817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.212828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.217231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.217262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 
[2024-07-22 11:14:56.217273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.221725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.221757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.221768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.226194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.226226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.226237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.230749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.230780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.230791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.235328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.235361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.235372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.239933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.239964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.239976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.244539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.244574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.244586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.249246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.249278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.249290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.253891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.253922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.253933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.258486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.258531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.258542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.263062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.263093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.263104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.267570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.267601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.267612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.272037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.272067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.272078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.276564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.276596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.276606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.281100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.281132] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.281143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.285575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.285606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.285618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.290081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.290112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.290123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.294518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.294548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.129 [2024-07-22 11:14:56.294560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.129 [2024-07-22 11:14:56.298949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.129 [2024-07-22 11:14:56.298978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.130 [2024-07-22 11:14:56.298989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.130 [2024-07-22 11:14:56.303471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.130 [2024-07-22 11:14:56.303504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.130 [2024-07-22 11:14:56.303515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.130 [2024-07-22 11:14:56.308011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.130 [2024-07-22 11:14:56.308042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.130 [2024-07-22 11:14:56.308052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.130 [2024-07-22 11:14:56.312475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.130 [2024-07-22 
11:14:56.312505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.130 [2024-07-22 11:14:56.312516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.130 [2024-07-22 11:14:56.317031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.130 [2024-07-22 11:14:56.317061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.130 [2024-07-22 11:14:56.317071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.130 [2024-07-22 11:14:56.321396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.130 [2024-07-22 11:14:56.321426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.130 [2024-07-22 11:14:56.321437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.130 [2024-07-22 11:14:56.325742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.130 [2024-07-22 11:14:56.325773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.130 [2024-07-22 11:14:56.325784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.130 [2024-07-22 11:14:56.330126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.130 [2024-07-22 11:14:56.330158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.130 [2024-07-22 11:14:56.330168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.389 [2024-07-22 11:14:56.334640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.389 [2024-07-22 11:14:56.334671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.389 [2024-07-22 11:14:56.334682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.389 [2024-07-22 11:14:56.339161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.389 [2024-07-22 11:14:56.339191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.389 [2024-07-22 11:14:56.339202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.389 [2024-07-22 11:14:56.343582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xe08cf0) 01:17:51.389 [2024-07-22 11:14:56.343612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.389 [2024-07-22 11:14:56.343623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.389 [2024-07-22 11:14:56.348086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.389 [2024-07-22 11:14:56.348138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.389 [2024-07-22 11:14:56.348150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.389 [2024-07-22 11:14:56.352674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.389 [2024-07-22 11:14:56.352705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.389 [2024-07-22 11:14:56.352716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.389 [2024-07-22 11:14:56.357073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.389 [2024-07-22 11:14:56.357104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.389 [2024-07-22 11:14:56.357114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.389 [2024-07-22 11:14:56.361600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.389 [2024-07-22 11:14:56.361632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.389 [2024-07-22 11:14:56.361642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.389 [2024-07-22 11:14:56.366153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.389 [2024-07-22 11:14:56.366185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.389 [2024-07-22 11:14:56.366195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.389 [2024-07-22 11:14:56.370638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.389 [2024-07-22 11:14:56.370669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.389 [2024-07-22 11:14:56.370680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.389 [2024-07-22 11:14:56.375051] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.389 [2024-07-22 11:14:56.375082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.389 [2024-07-22 11:14:56.375092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.389 [2024-07-22 11:14:56.379544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.389 [2024-07-22 11:14:56.379575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.379586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.384042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.384073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.384084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.388636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.388667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.388677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.393166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.393197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.393208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.397711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.397743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.397754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.402225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.402256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.402267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 01:17:51.390 [2024-07-22 11:14:56.406667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.406697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.406708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.411108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.411139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.411149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.415529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.415559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.415570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.420051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.420082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.420093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.424581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.424612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.424623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.429048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.429077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.429088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.433431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.433468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.433479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.437845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.437885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.437896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.442332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.442363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.442374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.446841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.446882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.446893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.451328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.451358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.451369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.455809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.455840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.455863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.460331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.460362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.460373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.464896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.464924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.464935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.469392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.469423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.469434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.473868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.473897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.473907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.478281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.478328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.478340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.482727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.482759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.482770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.487205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.487235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.487246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.491647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.491679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.491690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.496110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.496141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:17:51.390 [2024-07-22 11:14:56.496152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.500642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.500673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.500684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.505178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.505210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.390 [2024-07-22 11:14:56.505221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.390 [2024-07-22 11:14:56.509626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.390 [2024-07-22 11:14:56.509657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.509668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.514263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.514295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.514306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.518788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.518819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.518830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.523251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.523281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.523292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.527679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.527709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.527720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.532212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.532243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.532254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.536642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.536673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.536684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.541115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.541146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.541157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.545670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.545701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.545712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.550250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.550281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.550293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.554711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.554741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.554752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.559297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.559332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.559343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.563742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.563773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.563784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.568137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.568168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.568179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.572660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.572690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.572701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.577192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.577223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.577234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.581642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.581673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.581684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.586057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.586088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.586099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.590539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 
01:17:51.391 [2024-07-22 11:14:56.590569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.590580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.391 [2024-07-22 11:14:56.595095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.391 [2024-07-22 11:14:56.595126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.391 [2024-07-22 11:14:56.595137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.650 [2024-07-22 11:14:56.599579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.650 [2024-07-22 11:14:56.599610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.650 [2024-07-22 11:14:56.599621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.650 [2024-07-22 11:14:56.604049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.650 [2024-07-22 11:14:56.604080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.650 [2024-07-22 11:14:56.604090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.650 [2024-07-22 11:14:56.608518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.650 [2024-07-22 11:14:56.608549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.650 [2024-07-22 11:14:56.608560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.650 [2024-07-22 11:14:56.612989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.650 [2024-07-22 11:14:56.613017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.650 [2024-07-22 11:14:56.613028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.650 [2024-07-22 11:14:56.617461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.650 [2024-07-22 11:14:56.617490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.650 [2024-07-22 11:14:56.617501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.650 [2024-07-22 11:14:56.622040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.650 [2024-07-22 11:14:56.622069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.650 [2024-07-22 11:14:56.622080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.650 [2024-07-22 11:14:56.626475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.650 [2024-07-22 11:14:56.626506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.650 [2024-07-22 11:14:56.626516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.650 [2024-07-22 11:14:56.630885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.650 [2024-07-22 11:14:56.630913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.650 [2024-07-22 11:14:56.630924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.650 [2024-07-22 11:14:56.635317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.650 [2024-07-22 11:14:56.635348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.650 [2024-07-22 11:14:56.635359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.650 [2024-07-22 11:14:56.639826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.650 [2024-07-22 11:14:56.639869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.650 [2024-07-22 11:14:56.639882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.650 [2024-07-22 11:14:56.644314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.650 [2024-07-22 11:14:56.644345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.650 [2024-07-22 11:14:56.644355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.650 [2024-07-22 11:14:56.648790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.650 [2024-07-22 11:14:56.648821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.650 [2024-07-22 11:14:56.648832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.650 [2024-07-22 11:14:56.653245] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.650 [2024-07-22 11:14:56.653275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.653286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.657755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.657786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.657797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.662246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.662281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.662292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.666758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.666789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.666800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.671193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.671224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.671235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.675661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.675692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.675703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.680127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.680159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.680170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.684589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.684620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.684631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.689015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.689045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.689057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.693395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.693426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.693436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.698012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.698058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.698070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.702653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.702684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.702695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.707111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.707142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.707152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.711595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.711626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.711637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.716024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.716055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.716065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.720391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.720422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.720433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.724798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.724828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.724839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.729363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.729394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.729405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.733965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.733993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.734004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.738436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.738467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.738478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.742863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.742901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.742912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.747275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.747305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.747316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.751736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.751767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.751777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.756223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.756253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.756264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.760613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.760645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.760656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.765137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.765172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.765183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.769649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.769682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.769693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.774191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.774223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:17:51.651 [2024-07-22 11:14:56.774234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.778634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.778667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.651 [2024-07-22 11:14:56.778678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.651 [2024-07-22 11:14:56.783140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.651 [2024-07-22 11:14:56.783172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.783183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.787672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.787703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.787714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.792103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.792134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.792145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.796484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.796514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.796525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.800863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.800891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.800901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.805287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.805318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.805329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.809757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.809789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.809800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.814311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.814342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.814354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.818830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.818871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.818882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.823242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.823273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.823284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.827800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.827831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.827842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.832408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.832442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.832453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.836975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.837005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.837016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.841548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.841580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.841591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.846121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.846152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.846164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.850643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.850673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.850684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.652 [2024-07-22 11:14:56.855245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.652 [2024-07-22 11:14:56.855276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.652 [2024-07-22 11:14:56.855286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.912 [2024-07-22 11:14:56.859777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.912 [2024-07-22 11:14:56.859808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.912 [2024-07-22 11:14:56.859819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.912 [2024-07-22 11:14:56.864325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.912 [2024-07-22 11:14:56.864356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.912 [2024-07-22 11:14:56.864367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.912 [2024-07-22 11:14:56.868961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 
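The repeated "data digest error on tqpair" entries come from the NVMe/TCP data digest check: when data digests are negotiated, each data-carrying PDU is followed by a CRC32C of its payload, and the initiator recomputes that digest on receive (here via the accel-sequence callback nvme_tcp_accel_seq_recv_compute_crc32_done named in the log) and fails the command when the values disagree. A minimal, generic sketch of that kind of check, using a plain bitwise CRC32C rather than SPDK's accelerated path and with hypothetical helper names:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78.
 * Illustrative only -- real implementations are table-driven or
 * hardware-accelerated, as in SPDK's accel framework. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical receive-side check: compare the digest carried after the
 * PDU data against a digest recomputed over the received payload. */
static int data_digest_ok(const void *payload, size_t len, uint32_t received_digest)
{
    return crc32c(payload, len) == received_digest;
}

int main(void)
{
    uint8_t payload[512] = { [0] = 0xAB };   /* stand-in PDU payload    */
    uint32_t digest = crc32c(payload, sizeof(payload));

    printf("intact payload:    %s\n",
           data_digest_ok(payload, sizeof(payload), digest) ? "ok" : "data digest error");
    payload[100] ^= 0x01;                    /* flip one bit in transit */
    printf("corrupted payload: %s\n",
           data_digest_ok(payload, sizeof(payload), digest) ? "ok" : "data digest error");
    return 0;
}
```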
01:17:51.912 [2024-07-22 11:14:56.868993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.912 [2024-07-22 11:14:56.869004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.912 [2024-07-22 11:14:56.873423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.912 [2024-07-22 11:14:56.873463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.912 [2024-07-22 11:14:56.873474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.912 [2024-07-22 11:14:56.878039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.912 [2024-07-22 11:14:56.878078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.912 [2024-07-22 11:14:56.878090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.912 [2024-07-22 11:14:56.882652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.912 [2024-07-22 11:14:56.882685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.912 [2024-07-22 11:14:56.882696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.912 [2024-07-22 11:14:56.887206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.912 [2024-07-22 11:14:56.887239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.912 [2024-07-22 11:14:56.887250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.912 [2024-07-22 11:14:56.891617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.912 [2024-07-22 11:14:56.891650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.912 [2024-07-22 11:14:56.891662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.912 [2024-07-22 11:14:56.896186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.912 [2024-07-22 11:14:56.896219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.912 [2024-07-22 11:14:56.896230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.912 [2024-07-22 11:14:56.900752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.912 [2024-07-22 11:14:56.900787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.912 [2024-07-22 11:14:56.900798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.912 [2024-07-22 11:14:56.905295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.912 [2024-07-22 11:14:56.905327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.912 [2024-07-22 11:14:56.905338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.912 [2024-07-22 11:14:56.909957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.912 [2024-07-22 11:14:56.909989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.912 [2024-07-22 11:14:56.910001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.912 [2024-07-22 11:14:56.914563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.912 [2024-07-22 11:14:56.914595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.912 [2024-07-22 11:14:56.914606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.912 [2024-07-22 11:14:56.919169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.919203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.919214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.923676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.923708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.923720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.928394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.928429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.928440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.932894] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.932924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.932935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.937399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.937433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.937452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.941986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.942017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.942029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.946477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.946512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.946524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.950991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.951020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.951032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.955570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.955602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.955614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.960161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.960194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.960205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 01:17:51.913 [2024-07-22 11:14:56.964685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.964716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.964727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.969109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.969140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.969151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.973534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.973564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.973575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.978017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.978048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.978058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.982495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.982526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.982537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.986993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.987022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.987033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.991453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.991484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.991494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:56.995944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:56.995975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:56.995986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:57.000379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:57.000411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:57.000422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:57.004756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:57.004788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:57.004799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:57.009358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:57.009390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:57.009401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:57.013861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:57.013890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:57.013900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:57.018283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:57.018314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:57.018325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:57.022741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:57.022771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:57.022782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:57.027223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:57.027254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:57.027265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:57.031815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:57.031859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:57.031870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:57.036361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:57.036393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:57.036404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:57.040843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:57.040881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:57.040893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:57.045244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.913 [2024-07-22 11:14:57.045275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.913 [2024-07-22 11:14:57.045286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.913 [2024-07-22 11:14:57.049648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.049679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.049690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.054100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.054132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:17:51.914 [2024-07-22 11:14:57.054142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.058587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.058618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.058629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.063014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.063043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.063054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.067517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.067549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.067559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.072725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.072756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.072767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.077170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.077201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.077211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.081591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.081621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.081631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.086048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.086076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.086087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.090627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.090658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.090670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.095197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.095228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.095239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.099633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.099665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.099675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.104091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.104122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.104132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.108556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.108587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.108597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.113026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.113055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.113066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:51.914 [2024-07-22 11:14:57.117490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:51.914 [2024-07-22 11:14:57.117520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:51.914 [2024-07-22 11:14:57.117531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.121936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.121965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.121975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.126389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.126419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.126430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.130807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.130838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.130862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.135171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.135202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.135212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.139616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.139646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.139657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.144127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.144157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.144168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.148589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 
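Each "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completion decodes as status code type 0x0 (generic) with status code 0x22, and the phase (p), more (m), and do-not-retry (dnr) bits all clear, so the failed reads remain retryable. Those fields live in the 16-bit status halfword of the completion queue entry; a small decoding sketch with a made-up raw value (not SPDK's spdk_nvme_cpl) follows:

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit status halfword of an NVMe completion queue entry:
 * bit 0 = phase tag, bits 8:1 = status code, bits 11:9 = status code type,
 * bits 13:12 = command retry delay, bit 14 = more, bit 15 = do not retry.
 * The example value matches the fields printed in the log:
 * SCT 0x0 (generic), SC 0x22, p:0 m:0 dnr:0. */
struct cqe_status {
    unsigned p   : 1;  /* phase tag                    */
    unsigned sc  : 8;  /* status code                  */
    unsigned sct : 3;  /* status code type             */
    unsigned crd : 2;  /* command retry delay          */
    unsigned m   : 1;  /* more information in log page */
    unsigned dnr : 1;  /* do not retry                 */
};

static struct cqe_status decode_status(uint16_t raw)
{
    struct cqe_status s = {
        .p   = raw & 0x1,
        .sc  = (raw >> 1) & 0xFF,
        .sct = (raw >> 9) & 0x7,
        .crd = (raw >> 12) & 0x3,
        .m   = (raw >> 14) & 0x1,
        .dnr = (raw >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    uint16_t raw = 0x22 << 1;   /* SC=0x22, SCT=0, P/M/DNR all zero */
    struct cqe_status s = decode_status(raw);

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n",
           (unsigned)s.sct, (unsigned)s.sc,
           (unsigned)s.p, (unsigned)s.m, (unsigned)s.dnr);
    return 0;
}
```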
01:17:52.174 [2024-07-22 11:14:57.148619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.148630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.153055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.153086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.153097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.157596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.157626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.157637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.162085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.162117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.162128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.166483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.166513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.166524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.170924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.170953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.170964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.175299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.175330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.175341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.179763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.179794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.179804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.184256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.184286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.184297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.188766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.188796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.188807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.193160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.193192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.193203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.197646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.197677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.197688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.202176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.202208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.202218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.206640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.206670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.206681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.211126] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.211156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.211167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.215594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.215624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.215635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.220097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.220128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.220139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.224518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.174 [2024-07-22 11:14:57.224548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.174 [2024-07-22 11:14:57.224559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.174 [2024-07-22 11:14:57.229044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.229075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.229086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.233433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.233470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.233481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.237927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.237956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.237967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 01:17:52.175 [2024-07-22 11:14:57.242453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.242484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.242495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.246900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.246930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.246940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.251366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.251397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.251409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.255796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.255828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.255839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.260256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.260288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.260299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.264794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.264826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.264837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.269455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.269502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.269513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.274210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.274245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.274258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.278907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.278935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.278947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.283473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.283506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.283517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.287947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.287977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.287987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.292406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.292438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.292449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.296863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.296892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.296903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.301348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.301380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.301390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.305827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.305871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.305883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.310345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.310376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.310387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.314758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.314789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.314799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.319165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.319197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.319208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.323567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.323599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.323609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.328080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.328112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.328122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.332560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.332590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:17:52.175 [2024-07-22 11:14:57.332601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.337001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.337029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.337039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.341414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.341451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.341462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.345860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.345889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.345900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.350338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.350369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.350380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.354747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.175 [2024-07-22 11:14:57.354778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.175 [2024-07-22 11:14:57.354788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.175 [2024-07-22 11:14:57.359156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.176 [2024-07-22 11:14:57.359186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.176 [2024-07-22 11:14:57.359197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.176 [2024-07-22 11:14:57.363687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.176 [2024-07-22 11:14:57.363718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.176 [2024-07-22 11:14:57.363729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.176 [2024-07-22 11:14:57.368164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.176 [2024-07-22 11:14:57.368195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.176 [2024-07-22 11:14:57.368206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.176 [2024-07-22 11:14:57.372577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.176 [2024-07-22 11:14:57.372609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.176 [2024-07-22 11:14:57.372619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.176 [2024-07-22 11:14:57.377036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.176 [2024-07-22 11:14:57.377065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.176 [2024-07-22 11:14:57.377075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.436 [2024-07-22 11:14:57.381455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.436 [2024-07-22 11:14:57.381485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.436 [2024-07-22 11:14:57.381496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.436 [2024-07-22 11:14:57.385955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.436 [2024-07-22 11:14:57.385983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.436 [2024-07-22 11:14:57.385994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.436 [2024-07-22 11:14:57.390545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.436 [2024-07-22 11:14:57.390579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.436 [2024-07-22 11:14:57.390591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.436 [2024-07-22 11:14:57.395030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.436 [2024-07-22 11:14:57.395061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.436 [2024-07-22 11:14:57.395072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.436 [2024-07-22 11:14:57.399515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.436 [2024-07-22 11:14:57.399545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.436 [2024-07-22 11:14:57.399556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.436 [2024-07-22 11:14:57.404066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.436 [2024-07-22 11:14:57.404096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.436 [2024-07-22 11:14:57.404107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.436 [2024-07-22 11:14:57.408476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.436 [2024-07-22 11:14:57.408507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.436 [2024-07-22 11:14:57.408517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.436 [2024-07-22 11:14:57.412973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.436 [2024-07-22 11:14:57.413002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.436 [2024-07-22 11:14:57.413012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.436 [2024-07-22 11:14:57.417371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.436 [2024-07-22 11:14:57.417402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.436 [2024-07-22 11:14:57.417413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.436 [2024-07-22 11:14:57.421868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.436 [2024-07-22 11:14:57.421896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.436 [2024-07-22 11:14:57.421907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.436 [2024-07-22 11:14:57.426261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 
01:17:52.436 [2024-07-22 11:14:57.426291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.436 [2024-07-22 11:14:57.426302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.436 [2024-07-22 11:14:57.430642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.436 [2024-07-22 11:14:57.430672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.436 [2024-07-22 11:14:57.430683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.435129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.435159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.435170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.439544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.439574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.439585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.444149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.444179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.444190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.448618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.448650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.448660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.453087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.453117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.453128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.457538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.457568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.457579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.462154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.462185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.462196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.466687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.466718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.466728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.471133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.471163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.471174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.475539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.475570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.475581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.480078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.480108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.480119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.484535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.484564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.484575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.488969] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.488999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.489009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.493471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.493501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.493512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.497962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.497991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.498002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.502376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.502408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.502419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.506882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.506925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.506936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.511358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.511389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.511400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.515865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.515893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.515904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 01:17:52.437 [2024-07-22 11:14:57.520218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.520248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.520258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.524701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.524732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.524743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.529113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.529145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.529156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.533575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.533606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.533617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.538022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.538050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.538061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.542448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.542479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.542490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.546932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.546960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.546971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.551410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.551440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.551451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.555828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.555870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.555881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.560189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.437 [2024-07-22 11:14:57.560218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.437 [2024-07-22 11:14:57.560229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.437 [2024-07-22 11:14:57.564633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.564663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.564674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.569111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.569141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.569152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.573537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.573566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.573577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.578004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.578033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.578044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.582471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.582504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.582515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.586989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.587017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.587028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.591415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.591446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.591457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.595902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.595931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.595942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.600344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.600376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.600386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.604770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.604800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.604811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.609185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.609215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:17:52.438 [2024-07-22 11:14:57.609226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.613586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.613616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.613627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.618164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.618196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.618208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.622647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.622677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.622688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.627101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.627132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.627143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.631499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.631530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.631540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.635936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.635966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.635977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.438 [2024-07-22 11:14:57.640301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.438 [2024-07-22 11:14:57.640331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.438 [2024-07-22 11:14:57.640342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.644816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.644858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.644870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.649370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.649402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.649413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.653905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.653934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.653945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.658350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.658381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.658392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.662780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.662812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.662823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.667267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.667298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.667310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.671764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.671795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.671805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.676236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.676269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.676280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.680742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.680773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.680784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.685281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.685311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.685322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.689783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.689815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.689826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.694248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.694278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.694289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.698622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.698654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.698665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.703150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 
01:17:52.698 [2024-07-22 11:14:57.703182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.703193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.707637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.707668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.707679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.712132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.712162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.712173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.716618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.716648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.716659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.721155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.721186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.721197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.725731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.725763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.725774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.698 [2024-07-22 11:14:57.730200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.698 [2024-07-22 11:14:57.730230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.698 [2024-07-22 11:14:57.730241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.734637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.734667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.734678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.739068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.739099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.739109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.743611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.743642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.743653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.748085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.748116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.748127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.752626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.752657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.752668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.757148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.757179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.757189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.761646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.761677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.761688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.766151] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.766182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.766192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.770563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.770594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.770605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.775041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.775070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.775081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.779492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.779522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.779532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.783963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.783992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.784003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.788395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.788426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.788437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.792786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.792819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.792829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 01:17:52.699 [2024-07-22 11:14:57.797280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.797312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.797323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.801827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.801871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.801882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.806353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.806389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.806399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.810855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.810881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.810891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.815359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.815391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.815401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.819791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.819823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.819833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.824248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.824278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.824289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.828678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.828710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.828721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.833128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.833159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.833170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.837674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.837705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.837716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.842144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.842175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.842186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.846592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.846623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.846634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.851272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.851303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.851313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.855815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.855858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.855869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:52.699 [2024-07-22 11:14:57.860285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.699 [2024-07-22 11:14:57.860315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.699 [2024-07-22 11:14:57.860326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:52.700 [2024-07-22 11:14:57.864876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.700 [2024-07-22 11:14:57.864904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.700 [2024-07-22 11:14:57.864915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:52.700 [2024-07-22 11:14:57.869308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08cf0) 01:17:52.700 [2024-07-22 11:14:57.869338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:52.700 [2024-07-22 11:14:57.869349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:52.700 01:17:52.700 Latency(us) 01:17:52.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:52.700 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 01:17:52.700 nvme0n1 : 2.00 6887.36 860.92 0.00 0.00 2320.53 2105.57 5632.41 01:17:52.700 =================================================================================================================== 01:17:52.700 Total : 6887.36 860.92 0.00 0.00 2320.53 2105.57 5632.41 01:17:52.700 0 01:17:52.700 11:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:17:52.700 11:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:17:52.700 11:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:17:52.700 | .driver_specific 01:17:52.700 | .nvme_error 01:17:52.700 | .status_code 01:17:52.700 | .command_transient_transport_error' 01:17:52.700 11:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:17:52.957 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 444 > 0 )) 01:17:52.957 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95212 01:17:52.957 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 95212 ']' 01:17:52.957 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 95212 01:17:52.957 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 01:17:52.957 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:52.957 11:14:58 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95212 01:17:52.957 killing process with pid 95212 01:17:52.957 Received shutdown signal, test time was about 2.000000 seconds 01:17:52.957 01:17:52.958 Latency(us) 01:17:52.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:52.958 =================================================================================================================== 01:17:52.958 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:17:52.958 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:17:52.958 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:17:52.958 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95212' 01:17:52.958 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 95212 01:17:52.958 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 95212 01:17:53.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:17:53.523 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 01:17:53.523 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:17:53.523 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 01:17:53.523 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:17:53.523 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:17:53.523 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95269 01:17:53.523 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95269 /var/tmp/bperf.sock 01:17:53.523 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 95269 ']' 01:17:53.523 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:17:53.523 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:53.523 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:17:53.523 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:53.523 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:53.523 11:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 01:17:53.523 [2024-07-22 11:14:58.478230] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:17:53.523 [2024-07-22 11:14:58.478305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95269 ] 01:17:53.523 [2024-07-22 11:14:58.621401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:53.523 [2024-07-22 11:14:58.688479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:17:53.781 [2024-07-22 11:14:58.761550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:17:54.348 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:54.348 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 01:17:54.348 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:17:54.348 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:17:54.610 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:17:54.610 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:54.610 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:54.611 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:54.611 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:54.611 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:54.868 nvme0n1 01:17:54.868 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:17:54.868 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:54.868 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:54.868 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:54.868 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:17:54.868 11:14:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:17:54.868 Running I/O for 2 seconds... 
01:17:54.868 [2024-07-22 11:15:00.040626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fef90 01:17:54.868 [2024-07-22 11:15:00.042873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:54.868 [2024-07-22 11:15:00.042931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:54.868 [2024-07-22 11:15:00.053472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190feb58 01:17:54.868 [2024-07-22 11:15:00.055525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:54.868 [2024-07-22 11:15:00.055560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:17:54.868 [2024-07-22 11:15:00.066328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fe2e8 01:17:54.868 [2024-07-22 11:15:00.068311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:54.868 [2024-07-22 11:15:00.068342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.078860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fda78 01:17:55.126 [2024-07-22 11:15:00.080823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.080861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.091604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fd208 01:17:55.126 [2024-07-22 11:15:00.093595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.093625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.104136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fc998 01:17:55.126 [2024-07-22 11:15:00.106071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.106100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.116547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fc128 01:17:55.126 [2024-07-22 11:15:00.118469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.118499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.128977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fb8b8 01:17:55.126 [2024-07-22 11:15:00.130873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.130903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.141340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fb048 01:17:55.126 [2024-07-22 11:15:00.143229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.143258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.153724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fa7d8 01:17:55.126 [2024-07-22 11:15:00.155595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.155625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.166133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f9f68 01:17:55.126 [2024-07-22 11:15:00.167973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.168000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.178665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f96f8 01:17:55.126 [2024-07-22 11:15:00.180492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.180520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.191143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f8e88 01:17:55.126 [2024-07-22 11:15:00.192938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.192967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.203636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f8618 01:17:55.126 [2024-07-22 11:15:00.205458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.205487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.216183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f7da8 01:17:55.126 [2024-07-22 11:15:00.218000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.218031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.228577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f7538 01:17:55.126 [2024-07-22 11:15:00.230431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.230460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.241268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f6cc8 01:17:55.126 [2024-07-22 11:15:00.243137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.243166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.253943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f6458 01:17:55.126 [2024-07-22 11:15:00.255663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.255692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.266409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f5be8 01:17:55.126 [2024-07-22 11:15:00.268119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.268150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.279025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f5378 01:17:55.126 [2024-07-22 11:15:00.280712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.280742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.291483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f4b08 01:17:55.126 [2024-07-22 11:15:00.293165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.293193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.304140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f4298 01:17:55.126 [2024-07-22 11:15:00.305820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.305862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.316548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f3a28 01:17:55.126 [2024-07-22 11:15:00.318266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.318299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:17:55.126 [2024-07-22 11:15:00.328996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f31b8 01:17:55.126 [2024-07-22 11:15:00.330681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.126 [2024-07-22 11:15:00.330712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:17:55.384 [2024-07-22 11:15:00.341666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f2948 01:17:55.384 [2024-07-22 11:15:00.343293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.384 [2024-07-22 11:15:00.343325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:17:55.384 [2024-07-22 11:15:00.354114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f20d8 01:17:55.384 [2024-07-22 11:15:00.355746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.384 [2024-07-22 11:15:00.355779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:17:55.384 [2024-07-22 11:15:00.366793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f1868 01:17:55.384 [2024-07-22 11:15:00.368392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.384 [2024-07-22 11:15:00.368425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:17:55.384 [2024-07-22 11:15:00.379590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f0ff8 01:17:55.384 [2024-07-22 11:15:00.381186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.384 [2024-07-22 11:15:00.381216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:17:55.384 [2024-07-22 11:15:00.392199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f0788 01:17:55.384 [2024-07-22 11:15:00.393760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.384 [2024-07-22 11:15:00.393791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:17:55.384 [2024-07-22 11:15:00.404694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190eff18 01:17:55.384 [2024-07-22 11:15:00.406287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.384 [2024-07-22 11:15:00.406315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:17:55.384 [2024-07-22 11:15:00.417110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190ef6a8 01:17:55.384 [2024-07-22 11:15:00.418637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.384 [2024-07-22 11:15:00.418668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:17:55.384 [2024-07-22 11:15:00.429499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190eee38 01:17:55.384 [2024-07-22 11:15:00.431010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.384 [2024-07-22 11:15:00.431038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:17:55.384 [2024-07-22 11:15:00.441828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190ee5c8 01:17:55.384 [2024-07-22 11:15:00.443317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.384 [2024-07-22 11:15:00.443346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:55.385 [2024-07-22 11:15:00.454183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190edd58 01:17:55.385 [2024-07-22 11:15:00.455652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.385 [2024-07-22 11:15:00.455680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:17:55.385 [2024-07-22 11:15:00.466572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190ed4e8 01:17:55.385 [2024-07-22 11:15:00.468044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.385 [2024-07-22 11:15:00.468073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:17:55.385 [2024-07-22 11:15:00.478968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190ecc78 01:17:55.385 [2024-07-22 11:15:00.480416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.385 [2024-07-22 11:15:00.480442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:17:55.385 [2024-07-22 11:15:00.491378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190ec408 01:17:55.385 [2024-07-22 11:15:00.492807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.385 [2024-07-22 11:15:00.492835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:17:55.385 [2024-07-22 11:15:00.503836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190ebb98 01:17:55.385 [2024-07-22 11:15:00.505275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.385 [2024-07-22 11:15:00.505304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:17:55.385 [2024-07-22 11:15:00.516300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190eb328 01:17:55.385 [2024-07-22 11:15:00.517709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.385 [2024-07-22 11:15:00.517738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:17:55.385 [2024-07-22 11:15:00.528753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190eaab8 01:17:55.385 [2024-07-22 11:15:00.530169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.385 [2024-07-22 11:15:00.530199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:17:55.385 [2024-07-22 11:15:00.541267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190ea248 01:17:55.385 [2024-07-22 11:15:00.542636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.385 [2024-07-22 11:15:00.542665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:17:55.385 [2024-07-22 11:15:00.553746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e99d8 01:17:55.385 [2024-07-22 11:15:00.555163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.385 [2024-07-22 
11:15:00.555191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:17:55.385 [2024-07-22 11:15:00.566299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e9168 01:17:55.385 [2024-07-22 11:15:00.567629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.385 [2024-07-22 11:15:00.567658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:17:55.385 [2024-07-22 11:15:00.578694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e88f8 01:17:55.385 [2024-07-22 11:15:00.580027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.385 [2024-07-22 11:15:00.580055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:17:55.385 [2024-07-22 11:15:00.591121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e8088 01:17:55.385 [2024-07-22 11:15:00.592429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.385 [2024-07-22 11:15:00.592457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:17:55.642 [2024-07-22 11:15:00.603614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e7818 01:17:55.643 [2024-07-22 11:15:00.604930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.643 [2024-07-22 11:15:00.604958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:17:55.643 [2024-07-22 11:15:00.616049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e6fa8 01:17:55.643 [2024-07-22 11:15:00.617327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.617357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.628527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e6738 01:17:55.644 [2024-07-22 11:15:00.629840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.629881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.640961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e5ec8 01:17:55.644 [2024-07-22 11:15:00.642216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:17:55.644 [2024-07-22 11:15:00.642245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.653397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e5658 01:17:55.644 [2024-07-22 11:15:00.654672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.654702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.665894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e4de8 01:17:55.644 [2024-07-22 11:15:00.667099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.667127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.678269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e4578 01:17:55.644 [2024-07-22 11:15:00.679462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.679489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.690698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e3d08 01:17:55.644 [2024-07-22 11:15:00.691892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.691920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.703089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e3498 01:17:55.644 [2024-07-22 11:15:00.704252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.704280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.715419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e2c28 01:17:55.644 [2024-07-22 11:15:00.716577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.716605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.727754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e23b8 01:17:55.644 [2024-07-22 11:15:00.728901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:627 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.728928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.740138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e1b48 01:17:55.644 [2024-07-22 11:15:00.741252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.741280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.752447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e12d8 01:17:55.644 [2024-07-22 11:15:00.753554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.753583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.764869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e0a68 01:17:55.644 [2024-07-22 11:15:00.765960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.765988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.777232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e01f8 01:17:55.644 [2024-07-22 11:15:00.778316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.778344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.789607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190df988 01:17:55.644 [2024-07-22 11:15:00.790667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.790696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.802096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190df118 01:17:55.644 [2024-07-22 11:15:00.803189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.803218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.814781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190de8a8 01:17:55.644 [2024-07-22 11:15:00.815817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:25075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.815858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.827282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190de038 01:17:55.644 [2024-07-22 11:15:00.828299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.828327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:17:55.644 [2024-07-22 11:15:00.844833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190de038 01:17:55.644 [2024-07-22 11:15:00.846820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.644 [2024-07-22 11:15:00.846857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:00.857241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190de8a8 01:17:55.903 [2024-07-22 11:15:00.859215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:00.859244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:00.869661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190df118 01:17:55.903 [2024-07-22 11:15:00.871623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:00.871652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:00.882087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190df988 01:17:55.903 [2024-07-22 11:15:00.884022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:00.884051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:00.894513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e01f8 01:17:55.903 [2024-07-22 11:15:00.896439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:00.896467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:00.906905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e0a68 01:17:55.903 [2024-07-22 11:15:00.908827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:00.908866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:00.919563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e12d8 01:17:55.903 [2024-07-22 11:15:00.921462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:00.921491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:00.932042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e1b48 01:17:55.903 [2024-07-22 11:15:00.933923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:00.933953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:00.944446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e23b8 01:17:55.903 [2024-07-22 11:15:00.946315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:00.946344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:00.956831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e2c28 01:17:55.903 [2024-07-22 11:15:00.958756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:00.958785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:00.969284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e3498 01:17:55.903 [2024-07-22 11:15:00.971131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:00.971159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:00.981798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e3d08 01:17:55.903 [2024-07-22 11:15:00.983655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:00.983683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:00.994373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e4578 01:17:55.903 [2024-07-22 
11:15:00.996163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:00.996191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:01.006856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e4de8 01:17:55.903 [2024-07-22 11:15:01.008720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:01.008750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:01.019523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e5658 01:17:55.903 [2024-07-22 11:15:01.021284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:01.021311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:01.031956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e5ec8 01:17:55.903 [2024-07-22 11:15:01.033727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:01.033757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:01.044513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e6738 01:17:55.903 [2024-07-22 11:15:01.046302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:01.046333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:01.057022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e6fa8 01:17:55.903 [2024-07-22 11:15:01.058788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:01.058817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:01.069570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e7818 01:17:55.903 [2024-07-22 11:15:01.071272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:01.071300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:01.082047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e8088 
01:17:55.903 [2024-07-22 11:15:01.083800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.903 [2024-07-22 11:15:01.083829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:17:55.903 [2024-07-22 11:15:01.094600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e88f8 01:17:55.904 [2024-07-22 11:15:01.096280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.904 [2024-07-22 11:15:01.096308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:17:55.904 [2024-07-22 11:15:01.107110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e9168 01:17:55.904 [2024-07-22 11:15:01.108745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:55.904 [2024-07-22 11:15:01.108774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:17:56.161 [2024-07-22 11:15:01.119722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190e99d8 01:17:56.161 [2024-07-22 11:15:01.121386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.161 [2024-07-22 11:15:01.121414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:17:56.161 [2024-07-22 11:15:01.132143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190ea248 01:17:56.162 [2024-07-22 11:15:01.133807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.133835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.144658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190eaab8 01:17:56.162 [2024-07-22 11:15:01.146291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.146337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.157131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190eb328 01:17:56.162 [2024-07-22 11:15:01.158727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.158755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.170138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) 
with pdu=0x2000190ebb98 01:17:56.162 [2024-07-22 11:15:01.171740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.171768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.182754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190ec408 01:17:56.162 [2024-07-22 11:15:01.184349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.184377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.195196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190ecc78 01:17:56.162 [2024-07-22 11:15:01.196738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.196767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.207791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190ed4e8 01:17:56.162 [2024-07-22 11:15:01.209336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.209365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.220356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190edd58 01:17:56.162 [2024-07-22 11:15:01.221886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.221916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.232963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190ee5c8 01:17:56.162 [2024-07-22 11:15:01.234576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.234605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.245542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190eee38 01:17:56.162 [2024-07-22 11:15:01.247039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.247068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.257926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bb1d70) with pdu=0x2000190ef6a8 01:17:56.162 [2024-07-22 11:15:01.259422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.259451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.270474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190eff18 01:17:56.162 [2024-07-22 11:15:01.271951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.271980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.282949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f0788 01:17:56.162 [2024-07-22 11:15:01.284377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.284405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.295496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f0ff8 01:17:56.162 [2024-07-22 11:15:01.296913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.296941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.308106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f1868 01:17:56.162 [2024-07-22 11:15:01.309517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.309545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.320572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f20d8 01:17:56.162 [2024-07-22 11:15:01.322006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.322034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.333171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f2948 01:17:56.162 [2024-07-22 11:15:01.334642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.334669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.345924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f31b8 01:17:56.162 [2024-07-22 11:15:01.347282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.347311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:17:56.162 [2024-07-22 11:15:01.358577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f3a28 01:17:56.162 [2024-07-22 11:15:01.359924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.162 [2024-07-22 11:15:01.359953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:17:56.420 [2024-07-22 11:15:01.371256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f4298 01:17:56.420 [2024-07-22 11:15:01.372578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.420 [2024-07-22 11:15:01.372606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:17:56.420 [2024-07-22 11:15:01.384076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f4b08 01:17:56.420 [2024-07-22 11:15:01.385483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.420 [2024-07-22 11:15:01.385515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:17:56.420 [2024-07-22 11:15:01.397310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f5378 01:17:56.420 [2024-07-22 11:15:01.398752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.420 [2024-07-22 11:15:01.398782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:17:56.420 [2024-07-22 11:15:01.410548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f5be8 01:17:56.421 [2024-07-22 11:15:01.411826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.411866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.423388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f6458 01:17:56.421 [2024-07-22 11:15:01.424694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.424723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.436278] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f6cc8 01:17:56.421 [2024-07-22 11:15:01.437554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.437582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.448894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f7538 01:17:56.421 [2024-07-22 11:15:01.450198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.450228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.461414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f7da8 01:17:56.421 [2024-07-22 11:15:01.462645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.462675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.473873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f8618 01:17:56.421 [2024-07-22 11:15:01.475080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.475110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.486360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f8e88 01:17:56.421 [2024-07-22 11:15:01.487566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.487597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.498877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f96f8 01:17:56.421 [2024-07-22 11:15:01.500044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.500075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.511429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f9f68 01:17:56.421 [2024-07-22 11:15:01.512594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.512624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:17:56.421 
[2024-07-22 11:15:01.523967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fa7d8 01:17:56.421 [2024-07-22 11:15:01.525122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.525152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.536645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fb048 01:17:56.421 [2024-07-22 11:15:01.537815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.537857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.549264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fb8b8 01:17:56.421 [2024-07-22 11:15:01.550449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.550479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.561787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fc128 01:17:56.421 [2024-07-22 11:15:01.562893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.562922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.574192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fc998 01:17:56.421 [2024-07-22 11:15:01.575329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.575359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.586720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fd208 01:17:56.421 [2024-07-22 11:15:01.587794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.587822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.599137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fda78 01:17:56.421 [2024-07-22 11:15:01.600186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.600214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 
sqhd:0008 p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.611660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fe2e8 01:17:56.421 [2024-07-22 11:15:01.612706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.612734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:17:56.421 [2024-07-22 11:15:01.624209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190feb58 01:17:56.421 [2024-07-22 11:15:01.625225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.421 [2024-07-22 11:15:01.625254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.641840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fef90 01:17:56.679 [2024-07-22 11:15:01.643902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.643931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.654409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190feb58 01:17:56.679 [2024-07-22 11:15:01.656386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.656414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.666947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fe2e8 01:17:56.679 [2024-07-22 11:15:01.668903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.668932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.679627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fda78 01:17:56.679 [2024-07-22 11:15:01.681576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.681604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.692072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fd208 01:17:56.679 [2024-07-22 11:15:01.694005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.694034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.704564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fc998 01:17:56.679 [2024-07-22 11:15:01.706527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.706555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.717095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fc128 01:17:56.679 [2024-07-22 11:15:01.719002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.719030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.729517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fb8b8 01:17:56.679 [2024-07-22 11:15:01.731401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.731427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.742059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fb048 01:17:56.679 [2024-07-22 11:15:01.743940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.743968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.754501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190fa7d8 01:17:56.679 [2024-07-22 11:15:01.756345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.756374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.766912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f9f68 01:17:56.679 [2024-07-22 11:15:01.768725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.768753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.779366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f96f8 01:17:56.679 [2024-07-22 11:15:01.781176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.781206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.792011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f8e88 01:17:56.679 [2024-07-22 11:15:01.793813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.793842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.804612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f8618 01:17:56.679 [2024-07-22 11:15:01.806419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.806447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.817076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f7da8 01:17:56.679 [2024-07-22 11:15:01.818925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.818953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.829472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f7538 01:17:56.679 [2024-07-22 11:15:01.831236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.831263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.842076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f6cc8 01:17:56.679 [2024-07-22 11:15:01.843804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.843831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.854456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f6458 01:17:56.679 [2024-07-22 11:15:01.856183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.856211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.866914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f5be8 01:17:56.679 [2024-07-22 11:15:01.868618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.868645] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:17:56.679 [2024-07-22 11:15:01.879325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f5378 01:17:56.679 [2024-07-22 11:15:01.881024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.679 [2024-07-22 11:15:01.881051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:17:56.937 [2024-07-22 11:15:01.891659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f4b08 01:17:56.937 [2024-07-22 11:15:01.893348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.937 [2024-07-22 11:15:01.893376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:17:56.937 [2024-07-22 11:15:01.904250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f4298 01:17:56.937 [2024-07-22 11:15:01.905945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.937 [2024-07-22 11:15:01.905975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:17:56.937 [2024-07-22 11:15:01.916660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f3a28 01:17:56.937 [2024-07-22 11:15:01.918326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.937 [2024-07-22 11:15:01.918355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:17:56.937 [2024-07-22 11:15:01.929181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f31b8 01:17:56.937 [2024-07-22 11:15:01.930810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.938 [2024-07-22 11:15:01.930839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:17:56.938 [2024-07-22 11:15:01.941647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f2948 01:17:56.938 [2024-07-22 11:15:01.943264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.938 [2024-07-22 11:15:01.943292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:17:56.938 [2024-07-22 11:15:01.954062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f20d8 01:17:56.938 [2024-07-22 11:15:01.955650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.938 [2024-07-22 11:15:01.955677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:17:56.938 [2024-07-22 11:15:01.966573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f1868 01:17:56.938 [2024-07-22 11:15:01.968156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.938 [2024-07-22 11:15:01.968184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:17:56.938 [2024-07-22 11:15:01.978936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f0ff8 01:17:56.938 [2024-07-22 11:15:01.980508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.938 [2024-07-22 11:15:01.980537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:17:56.938 [2024-07-22 11:15:01.991414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190f0788 01:17:56.938 [2024-07-22 11:15:01.992969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.938 [2024-07-22 11:15:01.992998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:17:56.938 [2024-07-22 11:15:02.004005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190eff18 01:17:56.938 [2024-07-22 11:15:02.005555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.938 [2024-07-22 11:15:02.005583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:17:56.938 [2024-07-22 11:15:02.016358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1d70) with pdu=0x2000190ef6a8 01:17:56.938 [2024-07-22 11:15:02.017898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:17:56.938 [2024-07-22 11:15:02.017926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:17:56.938 01:17:56.938 Latency(us) 01:17:56.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:56.938 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:56.938 nvme0n1 : 2.01 20186.71 78.85 0.00 0.00 6335.63 1750.26 25477.45 01:17:56.938 =================================================================================================================== 01:17:56.938 Total : 20186.71 78.85 0.00 0.00 6335.63 1750.26 25477.45 01:17:56.938 0 01:17:56.938 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:17:56.938 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:17:56.938 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:17:56.938 | 
.driver_specific 01:17:56.938 | .nvme_error 01:17:56.938 | .status_code 01:17:56.938 | .command_transient_transport_error' 01:17:56.938 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:17:57.195 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 01:17:57.195 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95269 01:17:57.195 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 95269 ']' 01:17:57.195 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 95269 01:17:57.195 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 01:17:57.195 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:57.195 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95269 01:17:57.195 killing process with pid 95269 01:17:57.195 Received shutdown signal, test time was about 2.000000 seconds 01:17:57.195 01:17:57.195 Latency(us) 01:17:57.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:57.195 =================================================================================================================== 01:17:57.195 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:17:57.196 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:17:57.196 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:17:57.196 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95269' 01:17:57.196 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 95269 01:17:57.196 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 95269 01:17:57.453 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 01:17:57.453 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:17:57.453 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 01:17:57.453 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 01:17:57.453 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 01:17:57.453 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 01:17:57.453 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95327 01:17:57.453 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95327 /var/tmp/bperf.sock 01:17:57.453 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 95327 ']' 01:17:57.453 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:17:57.453 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:57.453 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:17:57.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:17:57.453 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:57.453 11:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:57.453 [2024-07-22 11:15:02.616963] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:17:57.453 [2024-07-22 11:15:02.617263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 01:17:57.453 Zero copy mechanism will not be used. 01:17:57.453 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95327 ] 01:17:57.711 [2024-07-22 11:15:02.760157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:57.711 [2024-07-22 11:15:02.827104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:17:57.711 [2024-07-22 11:15:02.899508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:17:58.275 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:58.275 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 01:17:58.275 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:17:58.275 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:17:58.533 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:17:58.533 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:58.533 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:58.533 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:58.533 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:58.533 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:17:58.792 nvme0n1 01:17:58.792 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 01:17:58.792 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:58.792 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:17:58.792 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:58.792 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:17:58.792 11:15:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:17:59.051 I/O size of 131072 is greater than zero copy threshold (65536). 01:17:59.051 Zero copy mechanism will not be used. 01:17:59.051 Running I/O for 2 seconds... 01:17:59.051 [2024-07-22 11:15:04.064958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.065488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.065529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.069534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.069833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.070029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.074196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.074432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.074606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.078830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.078907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.078929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.083238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.083303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.083323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.087622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.087711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.087731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.092051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.092131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.092151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.096427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.096512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.096532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.100758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.100821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.100840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.104615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.105068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.105094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.108923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.109015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.109035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.113194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.113256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.113275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.117436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.117515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.117534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.122025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 
11:15:04.122090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.122110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.126660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.126723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.126743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.131228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.131294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.131314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.135821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.135900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.135920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.140304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.140384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.140403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.144917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.144979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.144998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.149400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.149493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.149513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.153904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with 
pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.153998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.154018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.158162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.051 [2024-07-22 11:15:04.158253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.051 [2024-07-22 11:15:04.158273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.051 [2024-07-22 11:15:04.162555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.162623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.162643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.166310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.166718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.166746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.170709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.170804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.170823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.175068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.175136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.175156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.179541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.179607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.179626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.184016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.184082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.184102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.188486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.188553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.188573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.192931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.193000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.193019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.197399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.197563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.197584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.201815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.201990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.202009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.205683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.206019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.206043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.209934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.209999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.210019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.214284] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.214351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.214371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.218707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.218772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.218792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.223231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.223299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.223319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.227785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.227869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.227888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.232331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.232425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.232444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.236749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.236817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.236837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.241194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.241280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.241299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
01:17:59.052 [2024-07-22 11:15:04.245651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.245737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.245757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.250054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.250118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.250138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.253922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.254435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.254461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.052 [2024-07-22 11:15:04.258273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.052 [2024-07-22 11:15:04.258362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.052 [2024-07-22 11:15:04.258382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.262565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.262629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.262649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.266792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.266875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.266895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.271249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.271322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.271342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.275731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.275810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.275829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.280138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.280209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.280229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.284480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.284677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.284697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.288793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.288987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.289007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.292680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.293036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.293068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.296869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.296927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.296947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.301126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.301189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.301209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.305431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.305504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.305523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.309668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.309739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.309758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.314255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.314334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.314354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.318757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.318862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.318883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.323215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.323279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.323299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.327677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.327768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.327787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.332088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.332152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.332172] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.335958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.336401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.336427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.340338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.340427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.340446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.344808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.344894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.344913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.349108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.349177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.349197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.353521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.353623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.353642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.358105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.358172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.358192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.362604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.362679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.362698] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.367061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.367126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.367147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.371387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.371499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.371519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.375684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.375841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.375872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.379536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.379906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.379932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.383673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.383748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.383767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.388111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.388190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.388209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.392506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.392571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:17:59.311 [2024-07-22 11:15:04.392591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.396892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.396955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.396974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.401369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.401435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.401465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.405938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.405999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.406018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.410538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.410609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.410628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.415147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.415220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.415239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.419865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.419929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.419947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.424294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.424368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.424388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.428599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.428720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.428739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.432951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.433037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.433057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.437266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.437330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.437349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.441224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.441681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.441707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.445457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.445553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.445572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.449764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.449831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.449861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.311 [2024-07-22 11:15:04.454009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.311 [2024-07-22 11:15:04.454117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.311 [2024-07-22 11:15:04.454137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.312 [2024-07-22 11:15:04.458468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.312 [2024-07-22 11:15:04.458539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.312 [2024-07-22 11:15:04.458571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.312 [2024-07-22 11:15:04.462959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.312 [2024-07-22 11:15:04.463039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.312 [2024-07-22 11:15:04.463059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.312 [2024-07-22 11:15:04.467379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.312 [2024-07-22 11:15:04.467444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.312 [2024-07-22 11:15:04.467465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.312 [2024-07-22 11:15:04.471928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.312 [2024-07-22 11:15:04.472113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.312 [2024-07-22 11:15:04.472134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.312 [2024-07-22 11:15:04.475810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.312 [2024-07-22 11:15:04.476181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.312 [2024-07-22 11:15:04.476216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.312 [2024-07-22 11:15:04.480080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.312 [2024-07-22 11:15:04.480148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.312 [2024-07-22 11:15:04.480168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.312 [2024-07-22 11:15:04.484500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.312 [2024-07-22 11:15:04.484568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.312 [2024-07-22 11:15:04.484588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.312 [2024-07-22 11:15:04.488987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.312 [2024-07-22 11:15:04.489051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.312 [2024-07-22 11:15:04.489071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.312 [2024-07-22 11:15:04.493396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.312 [2024-07-22 11:15:04.493465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.312 [2024-07-22 11:15:04.493484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.312 [2024-07-22 11:15:04.497910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.312 [2024-07-22 11:15:04.497976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.312 [2024-07-22 11:15:04.497996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.312 [2024-07-22 11:15:04.502466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.312 [2024-07-22 11:15:04.502529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.312 [2024-07-22 11:15:04.502549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.312 [2024-07-22 11:15:04.506816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.312 [2024-07-22 11:15:04.506922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.312 [2024-07-22 11:15:04.506942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.312 [2024-07-22 11:15:04.511124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.312 [2024-07-22 11:15:04.511210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.312 [2024-07-22 11:15:04.511230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.312 [2024-07-22 11:15:04.515423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.312 [2024-07-22 
11:15:04.515498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.312 [2024-07-22 11:15:04.515517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.519395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.519820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.519857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.523560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.523649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.523669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.528002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.528066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.528086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.532274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.532342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.532362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.536587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.536656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.536676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.541059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.541158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.541177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.545660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with 
pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.545745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.545765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.550057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.550165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.550186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.554465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.554556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.554576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.558393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.558909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.558935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.562699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.562765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.562785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.567028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.567094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.567113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.571370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.571434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.571453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.575790] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.575870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.575889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.580262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.580326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.580346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.584637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.584791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.584811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.589083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.589166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.589185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.593493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.593568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.593588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.597262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.597706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.597731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.601497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.571 [2024-07-22 11:15:04.601594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.571 [2024-07-22 11:15:04.601613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.571 [2024-07-22 11:15:04.605915] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.605995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.606018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.610223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.610293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.610313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.614676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.614746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.614765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.619081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.619161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.619181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.623594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.623784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.623803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.627947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.628113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.628132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.631823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.632196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.632218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
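The long run of entries above is the expected output of this test: each received PDU fails the NVMe/TCP data-digest check in tcp.c (data_crc32_calc_done reports "Data digest error"), and the corresponding write is completed with a transient transport error (00/22) with dnr:0, so the host side is permitted to retry. As a rough illustration of the kind of check involved, the sketch below computes an NVMe/TCP-style data digest (CRC32C over the PDU DATA field) and compares it with a received value. It is a standalone sketch, not SPDK's actual code path; the payload buffer and the received digest are made up for the example.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the digest
 * algorithm NVMe/TCP uses for HDGST/DDGST. This loosely mirrors the
 * comparison behind the data_crc32_calc_done errors in the log above. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xffffffffu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ (0x82f63b78u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xffffffffu;
}

int main(void)
{
    /* Hypothetical 32-byte payload plus a deliberately corrupted digest,
     * mimicking the error-injection pattern exercised by this test. */
    uint8_t payload[32];
    memset(payload, 0xa5, sizeof(payload));

    uint32_t expected = crc32c(payload, sizeof(payload));
    uint32_t received = expected ^ 0x1;   /* flip one bit of the digest */

    if (received != expected) {
        /* In the log, this condition surfaces as "Data digest error" and the
         * command completes with TRANSIENT TRANSPORT ERROR (00/22), dnr:0. */
        printf("Data digest error: expected 0x%08" PRIx32 ", got 0x%08" PRIx32 "\n",
               expected, received);
    }
    return 0;
}

Compiled and run on its own, the sketch prints one mismatch line, which is the same condition each error entry in this section reports once per injected PDU.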
01:17:59.572 [2024-07-22 11:15:04.635935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.636000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.636019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.640380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.640449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.640468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.644775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.644838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.644870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.649215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.649285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.649305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.653541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.653604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.653623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.657908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.657980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.657999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.662277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.662344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.662364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.666597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.666762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.666782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.670568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.670948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.670973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.674800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.674886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.674919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.679282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.679354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.679374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.683695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.683761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.683780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.688338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.688409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.688428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.692729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.692809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.692829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.697245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.697312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.697332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.701816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.701912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.701932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.706201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.706330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.706350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.710563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.710762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.710780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.715153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.715331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.715350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.719131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.719467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.719492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.723299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.723367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.723387] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.727612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.572 [2024-07-22 11:15:04.727678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.572 [2024-07-22 11:15:04.727697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.572 [2024-07-22 11:15:04.732065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.573 [2024-07-22 11:15:04.732130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.573 [2024-07-22 11:15:04.732149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.573 [2024-07-22 11:15:04.736436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.573 [2024-07-22 11:15:04.736503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.573 [2024-07-22 11:15:04.736522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.573 [2024-07-22 11:15:04.740767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.573 [2024-07-22 11:15:04.740834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.573 [2024-07-22 11:15:04.740866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.573 [2024-07-22 11:15:04.745199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.573 [2024-07-22 11:15:04.745268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.573 [2024-07-22 11:15:04.745287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.573 [2024-07-22 11:15:04.749621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.573 [2024-07-22 11:15:04.749701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.573 [2024-07-22 11:15:04.749720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.573 [2024-07-22 11:15:04.754112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.573 [2024-07-22 11:15:04.754189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.573 [2024-07-22 11:15:04.754209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.573 [2024-07-22 11:15:04.757931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.573 [2024-07-22 11:15:04.758372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.573 [2024-07-22 11:15:04.758396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.573 [2024-07-22 11:15:04.762221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.573 [2024-07-22 11:15:04.762310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.573 [2024-07-22 11:15:04.762328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.573 [2024-07-22 11:15:04.766446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.573 [2024-07-22 11:15:04.766516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.573 [2024-07-22 11:15:04.766535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.573 [2024-07-22 11:15:04.770792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.573 [2024-07-22 11:15:04.770897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.573 [2024-07-22 11:15:04.770916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.573 [2024-07-22 11:15:04.775248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.573 [2024-07-22 11:15:04.775320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.573 [2024-07-22 11:15:04.775339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.831 [2024-07-22 11:15:04.779651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.779728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.779747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.784261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.784323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 
11:15:04.784344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.788645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.788747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.788768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.793002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.793084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.793103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.796784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.797221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.797248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.800988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.801088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.801107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.805276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.805344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.805363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.809585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.809655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.809675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.814062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.814128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:17:59.832 [2024-07-22 11:15:04.814148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.818519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.818607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.818626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.823021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.823146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.823165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.827397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.827553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.827574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.831198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.831571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.831599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.835459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.835525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.835545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.839984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.840048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.840068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.844399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.844466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.844486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.848802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.848919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.848938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.853201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.853266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.853285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.857583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.857701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.857720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.861988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.862092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.862113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.866410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.866497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.866530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.870349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.870815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.870841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.874707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.874798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.874818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.878956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.879025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.879045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.883153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.883228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.883248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.887665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.887771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.887792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.892031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.892169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.892191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.896361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.896524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.896544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.900810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.832 [2024-07-22 11:15:04.900984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.832 [2024-07-22 11:15:04.901005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.832 [2024-07-22 11:15:04.904756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.905121] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.905148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.908980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.909041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.909061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.913314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.913387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.913407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.917745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.917810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.917830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.922140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.922211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.922231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.926560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.926666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.926686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.930986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.931142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.931163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.935288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.935375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.935395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.939613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.939676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.939696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.943646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.944103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.944129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.947964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.948053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.948073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.952381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.952445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.952464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.956761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.956833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.956865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.961428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.961509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.961529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.965792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 
11:15:04.965878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.965898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.970230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.970304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.970340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.974678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.974800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.974819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.979019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.979195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.979214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.982930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.983269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.983294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.987154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.987217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.987237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.991495] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.991570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.991589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:04.995863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with 
pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:04.995925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:04.995945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:05.000389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:05.000459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:05.000479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:05.004770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:05.004836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:05.004867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:05.009332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:05.009414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:05.009433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:05.013698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:05.013764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:05.013784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:05.018059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:05.018216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:05.018236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:05.021909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:05.022246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:05.022272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:05.026133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.833 [2024-07-22 11:15:05.026194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.833 [2024-07-22 11:15:05.026214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:17:59.833 [2024-07-22 11:15:05.030526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.834 [2024-07-22 11:15:05.030597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.834 [2024-07-22 11:15:05.030629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:17:59.834 [2024-07-22 11:15:05.034969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:17:59.834 [2024-07-22 11:15:05.035050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:59.834 [2024-07-22 11:15:05.035070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.092 [2024-07-22 11:15:05.039320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.039397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.039417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.043807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.043913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.043933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.048141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.048278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.048298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.052518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.052602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.052622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.056860] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.056927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.056947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.060688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.061145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.061172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.064970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.065064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.065084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.069363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.069427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.069457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.073828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.073925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.073973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.078478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.078563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.078582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.082964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.083028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.083048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.093 
[2024-07-22 11:15:05.087515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.087587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.087606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.091994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.092071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.092091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.096575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.096638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.096657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.101151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.101218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.101237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.105723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.105803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.105823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.110382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.110447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.110466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.115052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.115139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.115158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.119720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.119784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.119804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.124320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.124394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.124413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.128904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.128986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.129006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.133341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.133407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.133426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.137778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.137962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.137983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.142225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.142294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.142331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.146599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.146668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.146687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.150452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.150889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.150914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.154647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.154738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.154758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.159033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.159101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.159121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.093 [2024-07-22 11:15:05.163386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.093 [2024-07-22 11:15:05.163449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.093 [2024-07-22 11:15:05.163469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.094 [2024-07-22 11:15:05.167787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.094 [2024-07-22 11:15:05.167862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.094 [2024-07-22 11:15:05.167882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.094 [2024-07-22 11:15:05.172189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.094 [2024-07-22 11:15:05.172254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.094 [2024-07-22 11:15:05.172274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.094 [2024-07-22 11:15:05.176528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.094 [2024-07-22 11:15:05.176597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.094 [2024-07-22 11:15:05.176616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
01:18:00.094 [2024-07-22 11:15:05.180866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90
01:18:00.094 [2024-07-22 11:15:05.180952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:18:00.094 [2024-07-22 11:15:05.180972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
01:18:00.094 [2024-07-22 11:15:05.185205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90
01:18:00.094 [2024-07-22 11:15:05.185270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:18:00.094 [2024-07-22 11:15:05.185289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-message sequence (tcp.c:2113:data_crc32_calc_done data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90, nvme_qpair.c WRITE command notice, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every subsequent WRITE in this pass, from 11:15:05.189036 through 11:15:05.800929, differing only in timestamps, lba, cid and sqhd values ...]
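The data_crc32_calc_done errors captured above are SPDK's NVMe/TCP data digest (DDGST) check failing: the transport computes a CRC32C over each data PDU payload, and on a mismatch the I/O is completed with the TRANSIENT TRANSPORT ERROR (00/22) status shown in the paired completion notices, which appears to be what this test case is exercising. As a rough, generic sketch only (a plain bitwise CRC32C, not SPDK's optimized routines; the function name is illustrative), the digest being verified is computed along these lines:

    #include <stddef.h>
    #include <stdint.h>

    /* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the digest
     * family used for the NVMe/TCP DDGST field whose mismatch is reported in the
     * tcp.c:2113 messages above. Illustrative only; production code would use a
     * table-driven or hardware-accelerated variant. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *buf++;
            for (int k = 0; k < 8; k++)
                crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
        }
        return ~crc; /* standard final XOR */
    }

A mismatch between this kind of computed value and the DDGST carried in the PDU is what each data digest error above reports.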
01:18:00.633 [2024-07-22 11:15:05.804589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90
01:18:00.633 [2024-07-22 11:15:05.804679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:18:00.633 [2024-07-22 11:15:05.804698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
01:18:00.633 [2024-07-22 11:15:05.808931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0)
with pdu=0x2000190fef90 01:18:00.633 [2024-07-22 11:15:05.809010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.633 [2024-07-22 11:15:05.809029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.633 [2024-07-22 11:15:05.813487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.633 [2024-07-22 11:15:05.813559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.633 [2024-07-22 11:15:05.813579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.633 [2024-07-22 11:15:05.817896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.633 [2024-07-22 11:15:05.817970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.633 [2024-07-22 11:15:05.817990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.633 [2024-07-22 11:15:05.822264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.633 [2024-07-22 11:15:05.822335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.633 [2024-07-22 11:15:05.822356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.826762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.826828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.826847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.831138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.831256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.831276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.835404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.835559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.835579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.839269] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.839622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.839649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.843454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.843514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.843533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.847800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.847889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.847909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.852273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.852336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.852355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.856761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.856829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.856860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.861227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.861306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.861325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.865594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.865661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.865681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.869943] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.870044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.870064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.874379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.874468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.874488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.878815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.878907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.878927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.882683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.883140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.883165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.886868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.886954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.886973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.891282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.891345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.891364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.895725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.895788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.895808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
01:18:00.892 [2024-07-22 11:15:05.899979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.900045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.900064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.904389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.904457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.904476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.908831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.908916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.908936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.913261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.913384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.913404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.917728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.917887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.917906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.921645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.921991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.922016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.925789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.925870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.925891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.930139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.930208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.930227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.934563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.934625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.934644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.939041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.939109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.939129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.943540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.943604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.943624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.947950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.948011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.948030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.952280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.952357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.952376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.956727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.956814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.892 [2024-07-22 11:15:05.956834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.892 [2024-07-22 11:15:05.961015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.892 [2024-07-22 11:15:05.961087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:05.961106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:05.964898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:05.965319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:05.965346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:05.969172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:05.969262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:05.969282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:05.973580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:05.973656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:05.973675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:05.977895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:05.977957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:05.977977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:05.982440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:05.982534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:05.982554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:05.986957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:05.987080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:05.987099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:05.991313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:05.991420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:05.991440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:05.995772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:05.995936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:05.995955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:05.999774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:06.000142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:06.000170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:06.003962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:06.004026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:06.004045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:06.008298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:06.008377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:06.008397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:06.012576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:06.012649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:06.012669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:06.017018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:06.017113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 
11:15:06.017133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:06.021368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:06.021470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:06.021490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:06.025776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:06.025914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:06.025935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:06.030211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:06.030310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:06.030332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:06.034624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:06.034699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:06.034718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:06.038529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:06.038975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:06.039002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:06.042727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:06.042816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:06.042835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:06.047035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:06.047096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:18:00.893 [2024-07-22 11:15:06.047117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:06.051321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:06.051384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:06.051404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:18:00.893 [2024-07-22 11:15:06.055796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb20b0) with pdu=0x2000190fef90 01:18:00.893 [2024-07-22 11:15:06.055876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:18:00.893 [2024-07-22 11:15:06.055897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:18:00.893 01:18:00.893 Latency(us) 01:18:00.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:00.893 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 01:18:00.893 nvme0n1 : 2.00 7114.47 889.31 0.00 0.00 2245.16 1473.90 5342.89 01:18:00.893 =================================================================================================================== 01:18:00.893 Total : 7114.47 889.31 0.00 0.00 2245.16 1473.90 5342.89 01:18:00.893 0 01:18:00.893 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:18:00.893 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:18:00.893 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:18:00.893 | .driver_specific 01:18:00.893 | .nvme_error 01:18:00.893 | .status_code 01:18:00.893 | .command_transient_transport_error' 01:18:00.893 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:18:01.151 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 459 > 0 )) 01:18:01.151 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95327 01:18:01.151 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 95327 ']' 01:18:01.151 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 95327 01:18:01.151 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 01:18:01.151 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:18:01.151 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95327 01:18:01.151 killing process with pid 95327 01:18:01.151 Received shutdown signal, test time was about 2.000000 seconds 01:18:01.151 01:18:01.151 Latency(us) 01:18:01.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:01.151 =================================================================================================================== 
01:18:01.151 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:18:01.151 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:18:01.151 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:18:01.151 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95327' 01:18:01.151 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 95327 01:18:01.151 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 95327 01:18:01.716 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 95123 01:18:01.716 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 95123 ']' 01:18:01.716 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 95123 01:18:01.716 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 01:18:01.716 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:18:01.716 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95123 01:18:01.716 killing process with pid 95123 01:18:01.716 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:18:01.716 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:18:01.716 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95123' 01:18:01.716 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 95123 01:18:01.716 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 95123 01:18:01.974 01:18:01.974 real 0m17.758s 01:18:01.974 user 0m32.351s 01:18:01.974 sys 0m5.587s 01:18:01.974 ************************************ 01:18:01.974 END TEST nvmf_digest_error 01:18:01.974 ************************************ 01:18:01.974 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 01:18:01.974 11:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:18:01.974 11:15:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 01:18:01.974 11:15:07 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 01:18:01.974 11:15:07 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 01:18:01.974 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 01:18:01.974 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 01:18:01.974 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:18:01.974 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 01:18:01.974 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 01:18:01.974 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:18:01.974 rmmod nvme_tcp 01:18:01.974 rmmod nvme_fabrics 01:18:01.974 rmmod nvme_keyring 01:18:01.974 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@125 -- # return 0 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 95123 ']' 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 95123 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 95123 ']' 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 95123 01:18:02.233 Process with pid 95123 is not found 01:18:02.233 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (95123) - No such process 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 95123 is not found' 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:18:02.233 01:18:02.233 real 0m36.034s 01:18:02.233 user 1m3.671s 01:18:02.233 sys 0m12.012s 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 01:18:02.233 ************************************ 01:18:02.233 END TEST nvmf_digest 01:18:02.233 ************************************ 01:18:02.233 11:15:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:18:02.233 11:15:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:18:02.233 11:15:07 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 01:18:02.233 11:15:07 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 01:18:02.233 11:15:07 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 01:18:02.233 11:15:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:18:02.233 11:15:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:18:02.233 11:15:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:18:02.233 ************************************ 01:18:02.233 START TEST nvmf_host_multipath 01:18:02.233 ************************************ 01:18:02.233 11:15:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 01:18:02.492 * Looking for test storage... 
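The digest-error run above completes once the host-side accounting confirms the injected corruption was actually seen: get_transient_errcount reads the bdev I/O statistics back over the bdevperf RPC socket and the test asserts the counter is non-zero (459 in this run). A minimal sketch of that check, using only the rpc.py invocation and jq filter visible in the trace (socket path and bdev name as in this run):

# Pull per-bdev I/O statistics from the running bdevperf instance, then extract the
# number of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# Expect at least one such completion, matching the repeated data_crc32_calc_done
# digest errors logged above.
(( errcount > 0 )) && echo "transient transport errors observed: $errcount"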
01:18:02.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 01:18:02.493 11:15:07 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:18:02.493 Cannot find device "nvmf_tgt_br" 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:18:02.493 Cannot find device "nvmf_tgt_br2" 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 01:18:02.493 Cannot find device "nvmf_tgt_br" 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:18:02.493 Cannot find device "nvmf_tgt_br2" 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:02.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:02.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:18:02.493 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
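For orientation, the nvmf_veth_init sequence running here builds a small virtual test network: target-side veth ends inside the nvmf_tgt_ns_spdk namespace, an initiator-side veth on the host, and an nvmf_br bridge joining the host-side peer ends, with 10.0.0.1 on the initiator and 10.0.0.2/10.0.0.3 on the target. A condensed sketch of the same steps, drawn from the commands in this trace (link-up steps, the second target interface nvmf_tgt_if2/nvmf_tgt_br2, and the FORWARD rule are left out for brevity):

ip netns add nvmf_tgt_ns_spdk                                  # namespace that will run nvmf_tgt
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # first target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip link add nvmf_br type bridge                                # bridge joining the host-side peer ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) in
ping -c 1 10.0.0.2                                             # connectivity check, as in the log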
01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:18:02.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:18:02.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 01:18:02.751 01:18:02.751 --- 10.0.0.2 ping statistics --- 01:18:02.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:02.751 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:18:02.751 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:18:02.751 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 01:18:02.751 01:18:02.751 --- 10.0.0.3 ping statistics --- 01:18:02.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:02.751 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 01:18:02.751 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:18:02.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:18:02.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 01:18:02.751 01:18:02.751 --- 10.0.0.1 ping statistics --- 01:18:02.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:02.751 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:18:03.008 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:18:03.008 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 01:18:03.008 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:18:03.008 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:18:03.008 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:18:03.008 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:18:03.008 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:18:03.008 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:18:03.008 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:18:03.008 11:15:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 01:18:03.008 11:15:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:18:03.008 11:15:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 01:18:03.008 11:15:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:18:03.008 11:15:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:18:03.008 11:15:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=95594 01:18:03.008 11:15:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 95594 01:18:03.008 11:15:08 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 95594 ']' 01:18:03.008 11:15:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:03.008 11:15:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 01:18:03.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:03.008 11:15:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:03.008 11:15:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 01:18:03.008 11:15:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:18:03.008 [2024-07-22 11:15:08.059268] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:18:03.008 [2024-07-22 11:15:08.059344] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:18:03.008 [2024-07-22 11:15:08.203181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:18:03.266 [2024-07-22 11:15:08.273162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:18:03.266 [2024-07-22 11:15:08.273234] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:18:03.266 [2024-07-22 11:15:08.273244] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:18:03.266 [2024-07-22 11:15:08.273253] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:18:03.266 [2024-07-22 11:15:08.273260] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
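The xtrace above (nvmf_veth_init in nvmf/common.sh, followed by the `ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt` launch) is the entire fabric this multipath test runs on: one initiator-side veth on the host at 10.0.0.1, two target-side veths at 10.0.0.2 and 10.0.0.3 whose bridge-side peers hang off a shared bridge, and an iptables rule admitting NVMe/TCP on port 4420. As a reading aid, here is a minimal standalone sketch of that topology in plain iproute2 commands, with interface names, namespace name and addresses copied from the log; the real helper also tears down leftovers from a previous run and tolerates individual failures, which is omitted here.

    # Minimal sketch of the veth/namespace topology built by nvmf_veth_init above.
    # Names and addresses are taken from the log; run as root.
    set -e
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: "interface" end is used for traffic, "*_br" end joins the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # Move both target-side interfaces into the namespace the nvmf_tgt runs in.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the initiator-side peer with both target-side peers.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # Admit NVMe/TCP on 4420 and allow bridged traffic (copied from the log).
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Same reachability checks the harness performs before starting the target.
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this in place the target listening on 10.0.0.2:4420 and 10.0.0.2:4421 is reachable from the host-side initiator, which is what the rest of the log exercises.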
01:18:03.266 [2024-07-22 11:15:08.273512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:18:03.266 [2024-07-22 11:15:08.273513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:18:03.266 [2024-07-22 11:15:08.348315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:18:03.831 11:15:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:18:03.831 11:15:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 01:18:03.831 11:15:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:18:03.831 11:15:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 01:18:03.831 11:15:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:18:03.831 11:15:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:18:03.831 11:15:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95594 01:18:03.831 11:15:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:18:04.087 [2024-07-22 11:15:09.157260] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:18:04.087 11:15:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:18:04.345 Malloc0 01:18:04.345 11:15:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:18:04.602 11:15:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:18:04.602 11:15:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:18:04.858 [2024-07-22 11:15:09.979625] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:18:04.859 11:15:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:18:05.116 [2024-07-22 11:15:10.191407] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:18:05.116 11:15:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 01:18:05.116 11:15:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95646 01:18:05.116 11:15:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:18:05.116 11:15:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95646 /var/tmp/bdevperf.sock 01:18:05.116 11:15:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 95646 ']' 01:18:05.116 11:15:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:18:05.116 11:15:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 01:18:05.116 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:18:05.116 11:15:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:18:05.116 11:15:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 01:18:05.116 11:15:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:18:06.046 11:15:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:18:06.046 11:15:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 01:18:06.046 11:15:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:18:06.316 11:15:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 01:18:06.574 Nvme0n1 01:18:06.574 11:15:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:18:06.833 Nvme0n1 01:18:06.833 11:15:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 01:18:06.833 11:15:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 01:18:07.769 11:15:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 01:18:07.769 11:15:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:18:08.026 11:15:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:18:08.282 11:15:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 01:18:08.282 11:15:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95594 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:18:08.282 11:15:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95691 01:18:08.282 11:15:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:18:14.837 Attaching 4 probes... 
01:18:14.837 @path[10.0.0.2, 4421]: 22526 01:18:14.837 @path[10.0.0.2, 4421]: 22823 01:18:14.837 @path[10.0.0.2, 4421]: 21017 01:18:14.837 @path[10.0.0.2, 4421]: 19800 01:18:14.837 @path[10.0.0.2, 4421]: 17617 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95691 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:18:14.837 11:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:18:14.837 11:15:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 01:18:14.837 11:15:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95803 01:18:14.837 11:15:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:18:14.837 11:15:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95594 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:18:21.398 Attaching 4 probes... 
01:18:21.398 @path[10.0.0.2, 4420]: 22368 01:18:21.398 @path[10.0.0.2, 4420]: 23006 01:18:21.398 @path[10.0.0.2, 4420]: 22408 01:18:21.398 @path[10.0.0.2, 4420]: 22792 01:18:21.398 @path[10.0.0.2, 4420]: 22773 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95803 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:18:21.398 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:18:21.655 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 01:18:21.655 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95916 01:18:21.655 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:18:21.655 11:15:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95594 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:18:28.206 11:15:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:18:28.206 11:15:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:18:28.206 11:15:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:18:28.206 11:15:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:18:28.206 Attaching 4 probes... 
01:18:28.206 @path[10.0.0.2, 4421]: 16970 01:18:28.206 @path[10.0.0.2, 4421]: 21736 01:18:28.206 @path[10.0.0.2, 4421]: 21286 01:18:28.206 @path[10.0.0.2, 4421]: 21936 01:18:28.206 @path[10.0.0.2, 4421]: 21387 01:18:28.206 11:15:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:18:28.206 11:15:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:18:28.206 11:15:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:18:28.206 11:15:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:18:28.206 11:15:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:18:28.206 11:15:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:18:28.206 11:15:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95916 01:18:28.206 11:15:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:18:28.206 11:15:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 01:18:28.206 11:15:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:18:28.206 11:15:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:18:28.206 11:15:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 01:18:28.206 11:15:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96028 01:18:28.206 11:15:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:18:28.206 11:15:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95594 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:18:34.773 Attaching 4 probes... 
01:18:34.773 01:18:34.773 01:18:34.773 01:18:34.773 01:18:34.773 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96028 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95594 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96141 01:18:34.773 11:15:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:18:41.346 11:15:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:18:41.347 11:15:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:18:41.347 11:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:18:41.347 11:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:18:41.347 Attaching 4 probes... 
01:18:41.347 @path[10.0.0.2, 4421]: 21249 01:18:41.347 @path[10.0.0.2, 4421]: 22525 01:18:41.347 @path[10.0.0.2, 4421]: 22664 01:18:41.347 @path[10.0.0.2, 4421]: 22702 01:18:41.347 @path[10.0.0.2, 4421]: 22655 01:18:41.347 11:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:18:41.347 11:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:18:41.347 11:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:18:41.347 11:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:18:41.347 11:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:18:41.347 11:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:18:41.347 11:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96141 01:18:41.347 11:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:18:41.347 11:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:18:41.347 11:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 01:18:42.280 11:15:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 01:18:42.280 11:15:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96266 01:18:42.281 11:15:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:18:42.281 11:15:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95594 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:18:48.845 Attaching 4 probes... 
01:18:48.845 @path[10.0.0.2, 4420]: 22270 01:18:48.845 @path[10.0.0.2, 4420]: 22741 01:18:48.845 @path[10.0.0.2, 4420]: 22673 01:18:48.845 @path[10.0.0.2, 4420]: 22686 01:18:48.845 @path[10.0.0.2, 4420]: 22659 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96266 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:18:48.845 [2024-07-22 11:15:53.769124] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:18:48.845 11:15:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 01:18:55.429 11:15:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 01:18:55.429 11:15:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96440 01:18:55.429 11:15:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:18:55.429 11:15:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95594 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:19:01.995 11:16:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:19:01.995 11:16:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:19:01.995 Attaching 4 probes... 
01:19:01.995 @path[10.0.0.2, 4421]: 19793 01:19:01.995 @path[10.0.0.2, 4421]: 20489 01:19:01.995 @path[10.0.0.2, 4421]: 20683 01:19:01.995 @path[10.0.0.2, 4421]: 20990 01:19:01.995 @path[10.0.0.2, 4421]: 20896 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96440 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95646 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 95646 ']' 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 95646 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95646 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95646' 01:19:01.995 killing process with pid 95646 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 95646 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 95646 01:19:01.995 Connection closed with partial response: 01:19:01.995 01:19:01.995 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95646 01:19:01.995 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:19:01.995 [2024-07-22 11:15:10.242407] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:19:01.995 [2024-07-22 11:15:10.242488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95646 ] 01:19:01.995 [2024-07-22 11:15:10.386100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:01.995 [2024-07-22 11:15:10.430535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:19:01.995 [2024-07-22 11:15:10.472398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:19:01.995 Running I/O for 90 seconds... 
01:19:01.995 [2024-07-22 11:15:19.994440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.995 [2024-07-22 11:15:19.994506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:19:01.995 [2024-07-22 11:15:19.994554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.995 [2024-07-22 11:15:19.994570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:19:01.995 [2024-07-22 11:15:19.994590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.995 [2024-07-22 11:15:19.994603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:19:01.995 [2024-07-22 11:15:19.994634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.995 [2024-07-22 11:15:19.994646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:19:01.995 [2024-07-22 11:15:19.994663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.995 [2024-07-22 11:15:19.994676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:19:01.995 [2024-07-22 11:15:19.994693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.995 [2024-07-22 11:15:19.994705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:19:01.995 [2024-07-22 11:15:19.994723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.995 [2024-07-22 11:15:19.994735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:19:01.995 [2024-07-22 11:15:19.994753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.995 [2024-07-22 11:15:19.994765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:19:01.995 [2024-07-22 11:15:19.994783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.995 [2024-07-22 11:15:19.994795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:19:01.995 [2024-07-22 11:15:19.994813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.995 [2024-07-22 11:15:19.994825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:19:01.995 [2024-07-22 11:15:19.994842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.994886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.994904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.994916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.994934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.994946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.994965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.994977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.994995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.995007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.995037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:19:01.996 [2024-07-22 11:15:19.995486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.995576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.995608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.995638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.995673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.995704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.995734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.995764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 
nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.995794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.995978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.995990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
01:19:01.996 [2024-07-22 11:15:19.996440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.996824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.996975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.996988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.997005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.997018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.997036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.997048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.997066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.996 [2024-07-22 11:15:19.997078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.997096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.997108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:19:01.996 [2024-07-22 11:15:19.997127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.996 [2024-07-22 11:15:19.997140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.997170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.997204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.997236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.997266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.997296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.997327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:19:01.997 [2024-07-22 11:15:19.997369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 
lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.997885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.997917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.997949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.997968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.997981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.998000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.998013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.998039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.998052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.998071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.998084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.998103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.998116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.999278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.999314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.999348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.999379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.999409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.999440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.999470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
01:19:01.997 [2024-07-22 11:15:19.999487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.999500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:19.999632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.999673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.999704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.999734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.999765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.999795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.999825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.999867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:19.999886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:19.999898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:26.493376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:26.493407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:26.493436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:26.493476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:26.493524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:26.493556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:26.493588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:26.493620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:19:01.997 [2024-07-22 11:15:26.493836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.997 [2024-07-22 11:15:26.493975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.493994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:26.494007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.494026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.997 [2024-07-22 11:15:26.494039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:19:01.997 [2024-07-22 11:15:26.494058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.494071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.494103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.494135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.494173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.494205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.494237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.494269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.494302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.494334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.494366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.494400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.494432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.494464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.494496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.494528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.494565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.494597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.494629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.494661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.494693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.494725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.494757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.494800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
01:19:01.998 [2024-07-22 11:15:26.494819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.494832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.494874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.494907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.494938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.494975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.494994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.495007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.495039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.495588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.495620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.495652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.495684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.495716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.495748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:19:01.998 [2024-07-22 11:15:26.495784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.998 [2024-07-22 11:15:26.495815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.495977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.495990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.496009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.496022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.496041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.496054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.496073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.998 [2024-07-22 11:15:26.496086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:19:01.998 [2024-07-22 11:15:26.496116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 
nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.496627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:26.496659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:26.496690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:26.496722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:26.496754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
01:19:01.999 [2024-07-22 11:15:26.496773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:26.496786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:26.496820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.496839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:26.496862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.497441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:26.497475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.497506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.497519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.497547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.497561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.497587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.497600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.497634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.497648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.497673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.497687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.497712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.497726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.497752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.497765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.497802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.497816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.497843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.497867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.497894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.497907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.497933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.497946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.497972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.497985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.498011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.498024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.498051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.498065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.498090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.498103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:26.498135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:26.498148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.319798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.319869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.319915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.319929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.319947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.319961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.319978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.319991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.320021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.320051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.320082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.320112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.320143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:19:01.999 [2024-07-22 11:15:33.320174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.320204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.320257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.320287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.320318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.320351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.320382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:01.999 [2024-07-22 11:15:33.320891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:01.999 [2024-07-22 11:15:33.320925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:19:01.999 [2024-07-22 11:15:33.320945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.320957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.320975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.320988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.321025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.321056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.321086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.321118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
01:19:02.000 [2024-07-22 11:15:33.321136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.321148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.321720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.321753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.321785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.321819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.321856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.321898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.321930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.321963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.321982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.321995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:19:02.000 [2024-07-22 11:15:33.322129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.322284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.322317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.322351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.322384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.322416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.322449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.322483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.322517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.322969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.322981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.323000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.323013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.323032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.000 [2024-07-22 11:15:33.323046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.323064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.323077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.323103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.323117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
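The *NOTICE* pairs in this stretch of the log come from the SPDK helpers named in each line: nvme_io_qpair_print_command prints the submitted READ/WRITE and spdk_nvme_print_completion prints the status it completed with, here predominantly ASYMMETRIC ACCESS INACCESSIBLE (SCT/SC 03/02, the ANA "inaccessible" path status) on qid:1. For summarizing such a run offline, a minimal sketch like the one below tallies the completion statuses from a saved console log; it assumes only the line format visible above and is an illustration, not part of the SPDK test scripts.

  import re
  import sys
  from collections import Counter

  # Pattern inferred from the console output above; not an SPDK-provided format guarantee.
  COMPLETION_RE = re.compile(
      r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) "
      r"\((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\) qid:(?P<qid>\d+)"
  )

  def tally_completions(log_path):
      """Count completion notices per (status text, SCT, SC, qid)."""
      counts = Counter()
      with open(log_path, errors="replace") as log:
          for line in log:
              # A wrapped console line may carry several notices, so scan the whole line.
              for m in COMPLETION_RE.finditer(line):
                  counts[(m["status"], m["sct"], m["sc"], m["qid"])] += 1
      return counts

  if __name__ == "__main__":
      for (status, sct, sc, qid), n in tally_completions(sys.argv[1]).most_common():
          print(f"{n:7d}  qid:{qid}  {status} ({sct}/{sc})")

Run against the saved console log, this would print one line per distinct status, e.g. the ASYMMETRIC ACCESS INACCESSIBLE (03/02) count for qid:1; notices that a wrapping tool has split across physical lines are skipped, so the totals are approximate.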
01:19:02.000 [2024-07-22 11:15:33.323135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.000 [2024-07-22 11:15:33.323148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:19:02.000 [2024-07-22 11:15:33.323166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.323178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.323197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.323211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.323230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.323244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.323262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.323275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.323293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.323306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.323325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:33.323338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.323356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:33.323369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.323388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:33.323401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.323420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:33.323433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.323451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:33.323464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.323487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:33.323500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.323518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:33.323532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:33.324093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:33.324695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:33.324719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:19:02.001 [2024-07-22 11:15:33.324732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.343800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.343866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.343891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.343904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.343918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.343931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.343944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.343956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.343969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.343981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.343995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 
11:15:46.344126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.001 [2024-07-22 11:15:46.344951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.344977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.344990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.345002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.345016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.345028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.345041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.345057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.345071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.001 [2024-07-22 11:15:46.345083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.001 [2024-07-22 11:15:46.345097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 
[2024-07-22 11:15:46.345226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.345395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.345420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.345446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.345479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.345505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.345531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.345556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.345582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:57 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.345976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.345990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 
11:15:46.346288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.002 [2024-07-22 11:15:46.346627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.002 [2024-07-22 11:15:46.346856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.002 [2024-07-22 11:15:46.346868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.346882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.003 [2024-07-22 11:15:46.346895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.346908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.003 [2024-07-22 11:15:46.346921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.346935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.003 [2024-07-22 11:15:46.346947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.346960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.003 [2024-07-22 11:15:46.346972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.346986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.003 [2024-07-22 11:15:46.346998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.347011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.003 [2024-07-22 11:15:46.347023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.347037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:02.003 [2024-07-22 11:15:46.347054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.347067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.003 [2024-07-22 11:15:46.347079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.347093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.003 [2024-07-22 11:15:46.347105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.347119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.003 [2024-07-22 11:15:46.347131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.347145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.003 [2024-07-22 11:15:46.347157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.347170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.003 [2024-07-22 11:15:46.347182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.347196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.003 [2024-07-22 11:15:46.347208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.347221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:02.003 [2024-07-22 11:15:46.347233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.347279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:02.003 [2024-07-22 11:15:46.347290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:02.003 [2024-07-22 11:15:46.347300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75152 len:8 PRP1 0x0 PRP2 0x0 01:19:02.003 [2024-07-22 11:15:46.347313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:02.003 [2024-07-22 11:15:46.347362] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cb6af0 was disconnected and freed. reset controller. 
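Editor's note: the long run of "ABORTED - SQ DELETION" notices above is the driver printing every queued READ/WRITE that was aborted when the TCP qpair was torn down; the entries that follow show the controller being reset and reconnected. As a quick way to summarize such a dump offline, assuming the console output was captured to a file (for example the test's try.txt; the filename here is only a placeholder), something like the following grep pipeline could be used:

    # Hypothetical post-processing of a captured log file; adjust the path to wherever the output was saved.
    # Count aborted commands by opcode (READ vs WRITE):
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]\+' try.txt | sort | uniq -c
    # Total number of SQ-deletion aborts:
    grep -c 'ABORTED - SQ DELETION' try.txt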
01:19:02.003 [2024-07-22 11:15:46.348261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:02.003 [2024-07-22 11:15:46.348329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e56cc0 (9): Bad file descriptor 01:19:02.003 [2024-07-22 11:15:46.348604] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:19:02.003 [2024-07-22 11:15:46.348626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e56cc0 with addr=10.0.0.2, port=4421 01:19:02.003 [2024-07-22 11:15:46.348641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56cc0 is same with the state(5) to be set 01:19:02.003 [2024-07-22 11:15:46.348771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e56cc0 (9): Bad file descriptor 01:19:02.003 [2024-07-22 11:15:46.348815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:19:02.003 [2024-07-22 11:15:46.348838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:19:02.003 [2024-07-22 11:15:46.348865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:02.003 [2024-07-22 11:15:46.348891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:19:02.003 [2024-07-22 11:15:46.348903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:02.003 [2024-07-22 11:15:56.367604] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 01:19:02.003 Received shutdown signal, test time was about 54.329993 seconds 01:19:02.003 01:19:02.003 Latency(us) 01:19:02.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:19:02.003 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:19:02.003 Verification LBA range: start 0x0 length 0x4000 01:19:02.003 Nvme0n1 : 54.33 9323.61 36.42 0.00 0.00 13713.26 829.07 7061253.96 01:19:02.003 =================================================================================================================== 01:19:02.003 Total : 9323.61 36.42 0.00 0.00 13713.26 829.07 7061253.96 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:19:02.003 rmmod nvme_tcp 01:19:02.003 rmmod nvme_fabrics 01:19:02.003 rmmod 
nvme_keyring 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 95594 ']' 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 95594 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 95594 ']' 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 95594 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95594 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:19:02.003 killing process with pid 95594 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95594' 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 95594 01:19:02.003 11:16:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 95594 01:19:02.003 11:16:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:19:02.003 11:16:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:19:02.003 11:16:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:19:02.003 11:16:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:19:02.003 11:16:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 01:19:02.003 11:16:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:02.003 11:16:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:19:02.003 11:16:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:02.003 11:16:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:19:02.003 01:19:02.003 real 0m59.841s 01:19:02.003 user 2m39.782s 01:19:02.003 sys 0m23.378s 01:19:02.003 11:16:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 01:19:02.003 ************************************ 01:19:02.003 END TEST nvmf_host_multipath 01:19:02.003 11:16:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:19:02.003 ************************************ 01:19:02.262 11:16:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:19:02.262 11:16:07 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 01:19:02.262 11:16:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:19:02.262 11:16:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:19:02.262 11:16:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:19:02.262 ************************************ 01:19:02.262 START 
TEST nvmf_timeout 01:19:02.262 ************************************ 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 01:19:02.262 * Looking for test storage... 01:19:02.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 
01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:19:02.262 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:19:02.520 Cannot find device "nvmf_tgt_br" 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:19:02.520 Cannot find device "nvmf_tgt_br2" 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:19:02.520 Cannot find device "nvmf_tgt_br" 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:19:02.520 Cannot find device "nvmf_tgt_br2" 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:19:02.520 11:16:07 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:02.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:02.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:19:02.520 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:19:02.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:19:02.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 01:19:02.777 01:19:02.777 --- 10.0.0.2 ping statistics --- 01:19:02.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:02.777 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:19:02.777 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:19:02.777 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 01:19:02.777 01:19:02.777 --- 10.0.0.3 ping statistics --- 01:19:02.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:02.777 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:19:02.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:19:02.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 01:19:02.777 01:19:02.777 --- 10.0.0.1 ping statistics --- 01:19:02.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:02.777 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=96753 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 96753 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96753 ']' 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:02.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
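Editor's note: the nvmf_veth_init steps traced above build the test network: a namespace nvmf_tgt_ns_spdk holding the two target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator-side nvmf_init_if at 10.0.0.1, all joined by the bridge nvmf_br, plus an iptables ACCEPT rule for TCP port 4420, verified by the pings above. A condensed, stand-alone sketch of the same topology (assuming iproute2 and iptables; names and addresses taken from the log, not a substitute for nvmf_veth_init itself) would look roughly like this:

    # Condensed sketch of the veth/bridge topology built by nvmf_veth_init.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target reachability check, as in the log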
01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 01:19:02.777 11:16:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:19:03.041 [2024-07-22 11:16:07.997271] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:19:03.041 [2024-07-22 11:16:07.997351] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:19:03.041 [2024-07-22 11:16:08.128993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:19:03.041 [2024-07-22 11:16:08.178207] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:19:03.041 [2024-07-22 11:16:08.178268] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:19:03.041 [2024-07-22 11:16:08.178286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:19:03.041 [2024-07-22 11:16:08.178297] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:19:03.041 [2024-07-22 11:16:08.178305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:19:03.041 [2024-07-22 11:16:08.178440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:19:03.041 [2024-07-22 11:16:08.178442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:19:03.041 [2024-07-22 11:16:08.221819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:19:03.973 11:16:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:19:03.973 11:16:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 01:19:03.973 11:16:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:19:03.973 11:16:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 01:19:03.973 11:16:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:19:03.973 11:16:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:19:03.973 11:16:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:19:03.973 11:16:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:19:03.973 [2024-07-22 11:16:09.176180] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:19:04.231 11:16:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:19:04.231 Malloc0 01:19:04.231 11:16:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:19:04.489 11:16:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:19:04.747 11:16:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:19:05.005 [2024-07-22 
11:16:10.018773] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:19:05.005 11:16:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96799 01:19:05.005 11:16:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 01:19:05.005 11:16:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96799 /var/tmp/bdevperf.sock 01:19:05.005 11:16:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96799 ']' 01:19:05.005 11:16:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:19:05.005 11:16:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 01:19:05.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:19:05.005 11:16:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:19:05.005 11:16:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 01:19:05.005 11:16:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:19:05.005 [2024-07-22 11:16:10.091765] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:19:05.005 [2024-07-22 11:16:10.091894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96799 ] 01:19:05.263 [2024-07-22 11:16:10.236772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:05.264 [2024-07-22 11:16:10.309008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:19:05.264 [2024-07-22 11:16:10.382930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:19:05.829 11:16:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:19:05.829 11:16:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 01:19:05.829 11:16:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:19:06.086 11:16:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 01:19:06.344 NVMe0n1 01:19:06.344 11:16:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96817 01:19:06.344 11:16:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:19:06.344 11:16:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 01:19:06.344 Running I/O for 10 seconds... 
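Stripped of the xtrace prefixes, the target provisioning and initiator setup traced above reduce to a handful of RPCs: a TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem cnode1 carrying that namespace plus a listener on 10.0.0.2:4420, then bdevperf in RPC-server mode attaching the controller with a 5 s loss timeout and 2 s reconnect delay before perform_tests starts the 10 s verify workload. A condensed sketch reusing the exact commands from the trace (the bdev_nvme_set_options value is copied verbatim rather than interpreted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side (default /var/tmp/spdk.sock, reachable from the host)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: bdevperf as an RPC server on its own socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # kick off the 10 s verify workload over the bdevperf RPC socket
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &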
01:19:07.280 11:16:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:19:07.542 [2024-07-22 11:16:12.597335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 
11:16:12.597628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.542 [2024-07-22 11:16:12.597765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.542 [2024-07-22 11:16:12.597788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.542 [2024-07-22 11:16:12.597808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.542 [2024-07-22 11:16:12.597827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.542 [2024-07-22 11:16:12.597846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.542 [2024-07-22 11:16:12.597880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.542 [2024-07-22 11:16:12.597900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.542 [2024-07-22 11:16:12.597936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.597967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.597989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.598001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.598011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.598022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.598031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.598042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.598051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.598062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.598071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.598094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.598103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.598131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.598140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.598151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.542 [2024-07-22 11:16:12.598160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.542 [2024-07-22 11:16:12.598172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 
11:16:12.598520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:99 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.543 [2024-07-22 11:16:12.598950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.598987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.598997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.599005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.543 [2024-07-22 11:16:12.599014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.543 [2024-07-22 11:16:12.599023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82400 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:19:07.544 [2024-07-22 11:16:12.599340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.544 [2024-07-22 11:16:12.599449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.544 [2024-07-22 11:16:12.599472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.544 [2024-07-22 11:16:12.599490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.544 [2024-07-22 11:16:12.599508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.544 [2024-07-22 11:16:12.599527] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.544 [2024-07-22 11:16:12.599546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.544 [2024-07-22 11:16:12.599563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:07.544 [2024-07-22 11:16:12.599582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:07.544 [2024-07-22 11:16:12.599838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.544 [2024-07-22 11:16:12.599848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155de30 is same with the state(5) to be set 01:19:07.544 [2024-07-22 11:16:12.599860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:07.544 [2024-07-22 11:16:12.599868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:07.545 [2024-07-22 11:16:12.599883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82632 len:8 PRP1 0x0 PRP2 0x0 01:19:07.545 [2024-07-22 11:16:12.599892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.545 [2024-07-22 11:16:12.599903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:07.545 [2024-07-22 11:16:12.599909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:07.545 [2024-07-22 11:16:12.599917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82640 len:8 PRP1 0x0 PRP2 0x0 01:19:07.545 [2024-07-22 11:16:12.599925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 01:19:07.545 [2024-07-22 11:16:12.599934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:07.545 [2024-07-22 11:16:12.599941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:07.545 [2024-07-22 11:16:12.599948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82648 len:8 PRP1 0x0 PRP2 0x0 01:19:07.545 [2024-07-22 11:16:12.599957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.545 [2024-07-22 11:16:12.599965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:07.545 [2024-07-22 11:16:12.599972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:07.545 [2024-07-22 11:16:12.599979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83104 len:8 PRP1 0x0 PRP2 0x0 01:19:07.545 [2024-07-22 11:16:12.599988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.545 [2024-07-22 11:16:12.599996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:07.545 [2024-07-22 11:16:12.600003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:07.545 [2024-07-22 11:16:12.600010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83112 len:8 PRP1 0x0 PRP2 0x0 01:19:07.545 [2024-07-22 11:16:12.600018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.545 [2024-07-22 11:16:12.600027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:07.545 [2024-07-22 11:16:12.600034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:07.545 [2024-07-22 11:16:12.600041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83120 len:8 PRP1 0x0 PRP2 0x0 01:19:07.545 [2024-07-22 11:16:12.600053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.545 [2024-07-22 11:16:12.600062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:07.545 [2024-07-22 11:16:12.600069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:07.545 [2024-07-22 11:16:12.600076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83128 len:8 PRP1 0x0 PRP2 0x0 01:19:07.545 [2024-07-22 11:16:12.600085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.545 [2024-07-22 11:16:12.600094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:07.545 [2024-07-22 11:16:12.600100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:07.545 [2024-07-22 11:16:12.600107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83136 len:8 PRP1 0x0 PRP2 0x0 01:19:07.545 [2024-07-22 11:16:12.600116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.545 [2024-07-22 11:16:12.600124] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:07.545 [2024-07-22 11:16:12.600131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:07.545 [2024-07-22 11:16:12.600138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83144 len:8 PRP1 0x0 PRP2 0x0 01:19:07.545 [2024-07-22 11:16:12.600146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.545 [2024-07-22 11:16:12.600155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:07.545 [2024-07-22 11:16:12.600161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:07.545 [2024-07-22 11:16:12.600168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83152 len:8 PRP1 0x0 PRP2 0x0 01:19:07.545 [2024-07-22 11:16:12.600177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.545 [2024-07-22 11:16:12.600186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:07.545 [2024-07-22 11:16:12.600193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:07.545 [2024-07-22 11:16:12.600200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83160 len:8 PRP1 0x0 PRP2 0x0 01:19:07.545 [2024-07-22 11:16:12.600209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.545 [2024-07-22 11:16:12.600270] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x155de30 was disconnected and freed. reset controller. 
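The abort storm above is the direct effect of the fault injected at the start of this run: the listener is removed from the subsystem while bdevperf still has its queue depth of 128 in flight, so every outstanding READ/WRITE on qpair 1 completes as ABORTED - SQ DELETION, the qpair is freed, and bdev_nvme schedules a controller reset. The injection itself is a single RPC, exactly as traced:

    # tear down the target listener under load to provoke the timeout/reconnect path
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420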
01:19:07.545 [2024-07-22 11:16:12.607099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:19:07.545 [2024-07-22 11:16:12.607130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.545 [2024-07-22 11:16:12.607143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:19:07.545 [2024-07-22 11:16:12.607152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.545 [2024-07-22 11:16:12.607162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:19:07.545 [2024-07-22 11:16:12.607170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.545 [2024-07-22 11:16:12.607180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:19:07.545 [2024-07-22 11:16:12.607189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:07.545 [2024-07-22 11:16:12.607198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1528fa0 is same with the state(5) to be set 01:19:07.545 [2024-07-22 11:16:12.607377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:07.545 [2024-07-22 11:16:12.607398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1528fa0 (9): Bad file descriptor 01:19:07.545 [2024-07-22 11:16:12.607506] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:19:07.545 [2024-07-22 11:16:12.607528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1528fa0 with addr=10.0.0.2, port=4420 01:19:07.545 [2024-07-22 11:16:12.607538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1528fa0 is same with the state(5) to be set 01:19:07.545 [2024-07-22 11:16:12.607552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1528fa0 (9): Bad file descriptor 01:19:07.545 [2024-07-22 11:16:12.607566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:19:07.545 [2024-07-22 11:16:12.607575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:19:07.545 [2024-07-22 11:16:12.607586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:07.545 [2024-07-22 11:16:12.607603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
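From here the test is in its verification phase: with --reconnect-delay-sec 2 the bdev_nvme layer retries the TCP connection roughly every two seconds (11:16:12, 11:16:14, 11:16:16 in the trace), each attempt failing with connect() errno 111 because nothing listens on 10.0.0.2:4420 any more, while the 5 s --ctrlr-loss-timeout-sec window keeps the controller registered. The checks traced just below amount to confirming that the controller handle and its bdev are still visible over the bdevperf socket during that window; a sketch under the same assumptions as the earlier snippets:

    get_controller() { $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'; }
    get_bdev()       { $rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'; }
    sleep 2                                   # give the first reconnect attempt time to fail
    [[ "$(get_controller)" == "NVMe0" ]]      # controller must still exist while reconnecting
    [[ "$(get_bdev)" == "NVMe0n1" ]]          # bdev must still be exposed inside the loss-timeout window
    sleep 5                                   # outlive ctrlr-loss-timeout-sec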
01:19:07.545 [2024-07-22 11:16:12.607612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:07.545 11:16:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 01:19:09.447 [2024-07-22 11:16:14.604743] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:19:09.447 [2024-07-22 11:16:14.604838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1528fa0 with addr=10.0.0.2, port=4420 01:19:09.447 [2024-07-22 11:16:14.604865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1528fa0 is same with the state(5) to be set 01:19:09.447 [2024-07-22 11:16:14.604899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1528fa0 (9): Bad file descriptor 01:19:09.447 [2024-07-22 11:16:14.604918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:19:09.447 [2024-07-22 11:16:14.604928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:19:09.447 [2024-07-22 11:16:14.604942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:09.447 [2024-07-22 11:16:14.604972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:19:09.447 [2024-07-22 11:16:14.604982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:09.447 11:16:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 01:19:09.447 11:16:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 01:19:09.447 11:16:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:19:09.716 11:16:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 01:19:09.716 11:16:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 01:19:09.716 11:16:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 01:19:09.716 11:16:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 01:19:09.974 11:16:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 01:19:09.974 11:16:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 01:19:11.873 [2024-07-22 11:16:16.602118] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:19:11.873 [2024-07-22 11:16:16.602223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1528fa0 with addr=10.0.0.2, port=4420 01:19:11.873 [2024-07-22 11:16:16.602252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1528fa0 is same with the state(5) to be set 01:19:11.873 [2024-07-22 11:16:16.602295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1528fa0 (9): Bad file descriptor 01:19:11.873 [2024-07-22 11:16:16.602327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:19:11.873 [2024-07-22 11:16:16.602348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:19:11.873 [2024-07-22 11:16:16.602370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 01:19:11.873 [2024-07-22 11:16:16.602411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:19:11.873 [2024-07-22 11:16:16.602432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:13.778 [2024-07-22 11:16:18.599250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:13.778 [2024-07-22 11:16:18.599320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:19:13.778 [2024-07-22 11:16:18.599333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:19:13.778 [2024-07-22 11:16:18.599344] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 01:19:13.778 [2024-07-22 11:16:18.599367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:19:14.405 01:19:14.405 Latency(us) 01:19:14.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:19:14.405 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:19:14.405 Verification LBA range: start 0x0 length 0x4000 01:19:14.405 NVMe0n1 : 8.11 1266.38 4.95 15.79 0.00 100041.35 2579.33 7061253.96 01:19:14.405 =================================================================================================================== 01:19:14.405 Total : 1266.38 4.95 15.79 0.00 100041.35 2579.33 7061253.96 01:19:14.405 0 01:19:14.975 11:16:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 01:19:14.975 11:16:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:19:14.975 11:16:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 01:19:15.234 11:16:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 01:19:15.234 11:16:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 01:19:15.234 11:16:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 01:19:15.234 11:16:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 01:19:15.493 11:16:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 01:19:15.493 11:16:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96817 01:19:15.493 11:16:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96799 01:19:15.493 11:16:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96799 ']' 01:19:15.493 11:16:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96799 01:19:15.493 11:16:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 01:19:15.493 11:16:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:19:15.493 11:16:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96799 01:19:15.493 killing process with pid 96799 01:19:15.493 Received shutdown signal, test time was about 9.035114 seconds 01:19:15.493 01:19:15.493 Latency(us) 01:19:15.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:19:15.493 =================================================================================================================== 01:19:15.493 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:19:15.493 11:16:20 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 01:19:15.493 11:16:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:19:15.493 11:16:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96799' 01:19:15.493 11:16:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96799 01:19:15.493 11:16:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96799 01:19:15.752 11:16:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:19:16.010 [2024-07-22 11:16:21.006976] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:19:16.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:19:16.010 11:16:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96933 01:19:16.010 11:16:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 01:19:16.011 11:16:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96933 /var/tmp/bdevperf.sock 01:19:16.011 11:16:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96933 ']' 01:19:16.011 11:16:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:19:16.011 11:16:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 01:19:16.011 11:16:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:19:16.011 11:16:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 01:19:16.011 11:16:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:19:16.011 [2024-07-22 11:16:21.076119] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
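
(Side note, not part of the captured output: the bdevperf-over-RPC setup traced just above and below (@71-@86) can be reproduced with a short shell sketch. Commands, flags and paths are copied from this run; the socket-polling loop is only a stand-in for the suite's waitforlisten helper and is an assumption of the sketch, not part of the test.)

    #!/usr/bin/env bash
    # Sketch only: reproduces the traced bdevperf setup from this run.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk          # path as used in this workspace
    SOCK=/var/tmp/bdevperf.sock

    # Start bdevperf idle (-z) on core mask 0x4 with an RPC socket instead of a config file.
    "$SPDK_REPO/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!

    # Stand-in for waitforlisten: block until the RPC socket exists.
    until [ -S "$SOCK" ]; do sleep 0.1; done

    # Configure bdev_nvme and attach the NVMe-oF/TCP controller with the reconnect knobs
    # under test (traced at @78/@79).
    "$SPDK_REPO/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options -r -1
    "$SPDK_REPO/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # Kick off the verify workload ("Running I/O for 10 seconds..." in the log).
    "$SPDK_REPO/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &
    rpc_pid=$!
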
01:19:16.011 [2024-07-22 11:16:21.076198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96933 ] 01:19:16.271 [2024-07-22 11:16:21.220103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:16.271 [2024-07-22 11:16:21.296083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:19:16.271 [2024-07-22 11:16:21.370833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:19:16.838 11:16:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:19:16.838 11:16:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 01:19:16.838 11:16:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:19:17.097 11:16:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 01:19:17.356 NVMe0n1 01:19:17.356 11:16:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96961 01:19:17.356 11:16:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:19:17.356 11:16:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 01:19:17.356 Running I/O for 10 seconds... 01:19:18.290 11:16:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:19:18.551 [2024-07-22 11:16:23.602990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 
11:16:23.603154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.551 [2024-07-22 11:16:23.603384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.551 [2024-07-22 11:16:23.603403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.551 [2024-07-22 11:16:23.603424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.551 [2024-07-22 11:16:23.603434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.551 [2024-07-22 11:16:23.603442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.603461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.603479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.603498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.603516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.603534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603553] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.603919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.603939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 01:19:18.552 [2024-07-22 11:16:23.603950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.603972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.603982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.603991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.604003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.604011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.604022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.604030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.604041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.604049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.604060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.604070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.604080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.604088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.604099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.604107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.604117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.604126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.604136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.604145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 
11:16:23.604155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.604165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.604176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.604185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.604195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.604203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.604213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.604222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.604232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.552 [2024-07-22 11:16:23.604240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.552 [2024-07-22 11:16:23.604250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.552 [2024-07-22 11:16:23.604259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.604531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604541] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.604549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.604568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.604588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.604607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.604625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.604644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.604663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 
nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101888 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 01:19:18.553 [2024-07-22 11:16:23.604935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.604953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.604973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.604983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:101208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.604992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.605003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.605012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.605022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.605031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.605041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.605049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.605059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.553 [2024-07-22 11:16:23.605067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.553 [2024-07-22 11:16:23.605077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:19:18.554 [2024-07-22 11:16:23.605125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 
11:16:23.605330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:18.554 [2024-07-22 11:16:23.605409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.554 [2024-07-22 11:16:23.605428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.554 [2024-07-22 11:16:23.605446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.554 [2024-07-22 11:16:23.605464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.554 [2024-07-22 11:16:23.605490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.554 [2024-07-22 11:16:23.605509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.554 [2024-07-22 11:16:23.605528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:18.554 [2024-07-22 11:16:23.605547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:18.554 [2024-07-22 11:16:23.605588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:18.554 [2024-07-22 11:16:23.605596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101952 len:8 PRP1 0x0 PRP2 0x0 01:19:18.554 [2024-07-22 11:16:23.605605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:18.554 [2024-07-22 11:16:23.605673] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6f7e30 was disconnected and freed. reset controller. 01:19:18.554 [2024-07-22 11:16:23.605913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:18.554 [2024-07-22 11:16:23.605985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c2fa0 (9): Bad file descriptor 01:19:18.554 [2024-07-22 11:16:23.606089] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:19:18.554 [2024-07-22 11:16:23.606104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c2fa0 with addr=10.0.0.2, port=4420 01:19:18.554 [2024-07-22 11:16:23.606114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c2fa0 is same with the state(5) to be set 01:19:18.554 [2024-07-22 11:16:23.606128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c2fa0 (9): Bad file descriptor 01:19:18.554 [2024-07-22 11:16:23.606142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:19:18.554 [2024-07-22 11:16:23.606151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:19:18.554 [2024-07-22 11:16:23.606163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:18.554 [2024-07-22 11:16:23.606181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:19:18.554 [2024-07-22 11:16:23.606190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:19:18.554 11:16:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
01:19:19.489 [2024-07-22 11:16:24.604752] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
01:19:19.489 [2024-07-22 11:16:24.604835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c2fa0 with addr=10.0.0.2, port=4420
01:19:19.489 [2024-07-22 11:16:24.604860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c2fa0 is same with the state(5) to be set
01:19:19.489 [2024-07-22 11:16:24.604892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c2fa0 (9): Bad file descriptor
01:19:19.489 [2024-07-22 11:16:24.604910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
01:19:19.489 [2024-07-22 11:16:24.604920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
01:19:19.489 [2024-07-22 11:16:24.604933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
01:19:19.489 [2024-07-22 11:16:24.604962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
01:19:19.489 [2024-07-22 11:16:24.604973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:19:19.489 11:16:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
01:19:19.747 [2024-07-22 11:16:24.804138] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
01:19:19.747 11:16:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 96961
01:19:20.679 [2024-07-22 11:16:25.616784] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
01:19:28.790
01:19:28.790 Latency(us)
01:19:28.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:19:28.790 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
01:19:28.790 Verification LBA range: start 0x0 length 0x4000
01:19:28.790 NVMe0n1 : 10.01 8433.68 32.94 0.00 0.00 15152.58 1256.76 3018551.31
01:19:28.790 ===================================================================================================================
01:19:28.790 Total : 8433.68 32.94 0.00 0.00 15152.58 1256.76 3018551.31
01:19:28.790 0
01:19:28.790 11:16:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97067
01:19:28.790 11:16:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
01:19:28.790 11:16:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
01:19:28.790 Running I/O for 10 seconds...
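This pass is the core of the nvmf_timeout test: while bdevperf runs a verify job against NVMe0n1, the script removes the subsystem's TCP listener, lets the host's reconnect attempts fail with connect() errno 111, then re-adds the listener (host/timeout.sh@91 above) so the controller reset can complete before the 10-second run ends. The same toggle is repeated immediately below for a second pass. A minimal sketch of that toggle, reconstructed only from the rpc.py commands visible in this log (the sleep is illustrative, not the script's exact pacing):

    # Sketch of the listener toggle exercised above; arguments are the ones shown in this log.
    NQN=nqn.2016-06.io.spdk:cnode1
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # in-flight I/O is aborted, reconnects fail with errno 111
    sleep 1                                                               # leave the path down for a moment
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # next reconnect succeeds and the reset completes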
01:19:28.790 11:16:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:19:28.790 [2024-07-22 11:16:33.694419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12da460 is same with the state(5) to be set 01:19:28.790 [2024-07-22 11:16:33.694477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12da460 is same with the state(5) to be set 01:19:28.790 [2024-07-22 11:16:33.694488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12da460 is same with the state(5) to be set 01:19:28.790 [2024-07-22 11:16:33.694496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12da460 is same with the state(5) to be set 01:19:28.790 [2024-07-22 11:16:33.694504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12da460 is same with the state(5) to be set 01:19:28.790 [2024-07-22 11:16:33.694829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.790 [2024-07-22 11:16:33.694872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.694892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.790 [2024-07-22 11:16:33.694902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.694913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.790 [2024-07-22 11:16:33.694921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.694933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.790 [2024-07-22 11:16:33.694941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.694951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.790 [2024-07-22 11:16:33.694959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.694969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.790 [2024-07-22 11:16:33.694978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.694989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.790 [2024-07-22 11:16:33.694998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.695008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.790 [2024-07-22 11:16:33.695017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.695027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.790 [2024-07-22 11:16:33.695035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.695046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.790 [2024-07-22 11:16:33.695054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.695064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.790 [2024-07-22 11:16:33.695072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.695082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.790 [2024-07-22 11:16:33.695091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.695100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.790 [2024-07-22 11:16:33.695109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.695118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.790 [2024-07-22 11:16:33.695127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.695137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.790 [2024-07-22 11:16:33.695145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.790 [2024-07-22 11:16:33.695155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.790 [2024-07-22 11:16:33.695163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.791 [2024-07-22 11:16:33.695183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695194] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.791 [2024-07-22 11:16:33.695202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.791 [2024-07-22 11:16:33.695220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.791 [2024-07-22 11:16:33.695239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.791 [2024-07-22 11:16:33.695259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.791 [2024-07-22 11:16:33.695277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.791 [2024-07-22 11:16:33.695732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.791 [2024-07-22 
11:16:33.695750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.791 [2024-07-22 11:16:33.695769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.791 [2024-07-22 11:16:33.695787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.791 [2024-07-22 11:16:33.695805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.791 [2024-07-22 11:16:33.695824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.791 [2024-07-22 11:16:33.695842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.791 [2024-07-22 11:16:33.695869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.791 [2024-07-22 11:16:33.695943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.791 [2024-07-22 11:16:33.695953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.695961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.695971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.695979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.695988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.695997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.792 [2024-07-22 11:16:33.696179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.792 [2024-07-22 11:16:33.696197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.792 [2024-07-22 11:16:33.696215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.792 [2024-07-22 11:16:33.696233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.792 [2024-07-22 11:16:33.696252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.792 [2024-07-22 11:16:33.696270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.792 [2024-07-22 11:16:33.696288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.792 [2024-07-22 11:16:33.696306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:19:28.792 [2024-07-22 11:16:33.696499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:28.792 [2024-07-22 11:16:33.696676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696686] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.792 [2024-07-22 11:16:33.696695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.792 [2024-07-22 11:16:33.696713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.792 [2024-07-22 11:16:33.696723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.792 [2024-07-22 11:16:33.696731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.696741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.793 [2024-07-22 11:16:33.696749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.696759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.793 [2024-07-22 11:16:33.696768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.696783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.793 [2024-07-22 11:16:33.696791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.696801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.793 [2024-07-22 11:16:33.696810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.696819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.793 [2024-07-22 11:16:33.696828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.696838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.793 [2024-07-22 11:16:33.696853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.696864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.793 [2024-07-22 11:16:33.696872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.696883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.793 [2024-07-22 11:16:33.696891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.696900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.793 [2024-07-22 11:16:33.696909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.696918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.793 [2024-07-22 11:16:33.696927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.696937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.793 [2024-07-22 11:16:33.696945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.696955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:28.793 [2024-07-22 11:16:33.696965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.696974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f62e0 is same with the state(5) to be set 01:19:28.793 [2024-07-22 11:16:33.696986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.696993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.697000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100088 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.697009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.697022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.697032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.697044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100640 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.697053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.697061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.697068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.697075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100648 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.697085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.697093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.697100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.697107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100656 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.697116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.697124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.697131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.697138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100664 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.697146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.697154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.697161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.697168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100672 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.697177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.697185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.697192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.697199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100680 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.697207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.697216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.697222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.697231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100688 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.697239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.697248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.697255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.697262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100696 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.697270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.697279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.697286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.697293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100704 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.697301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.697310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.697317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.697324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100712 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.697333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.697342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.697348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.697355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100720 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.697364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.697372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.697379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.697386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100728 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.697394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 11:16:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 01:19:28.793 [2024-07-22 11:16:33.716654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.716701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.716719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100736 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.716737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.716753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:28.793 [2024-07-22 11:16:33.716765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:28.793 [2024-07-22 11:16:33.716778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100744 len:8 PRP1 0x0 PRP2 0x0 01:19:28.793 [2024-07-22 11:16:33.716793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.716897] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6f62e0 was disconnected and freed. reset controller. 01:19:28.793 [2024-07-22 11:16:33.717064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:19:28.793 [2024-07-22 11:16:33.717087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.717105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:19:28.793 [2024-07-22 11:16:33.717120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.717136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:19:28.793 [2024-07-22 11:16:33.717150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.793 [2024-07-22 11:16:33.717165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:19:28.793 [2024-07-22 11:16:33.717180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:28.794 [2024-07-22 11:16:33.717194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c2fa0 is same with the state(5) to be set 01:19:28.794 [2024-07-22 11:16:33.717519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:28.794 [2024-07-22 11:16:33.717551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c2fa0 (9): Bad file descriptor 01:19:28.794 [2024-07-22 11:16:33.717672] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:19:28.794 [2024-07-22 11:16:33.717702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c2fa0 with addr=10.0.0.2, port=4420 01:19:28.794 [2024-07-22 11:16:33.717718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c2fa0 is same with the state(5) to be set 01:19:28.794 [2024-07-22 11:16:33.717742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c2fa0 (9): Bad file descriptor 01:19:28.794 [2024-07-22 11:16:33.717763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:19:28.794 [2024-07-22 11:16:33.717778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:19:28.794 [2024-07-22 11:16:33.717794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:28.794 [2024-07-22 11:16:33.717819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
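After the second listener removal the same pattern repeats at the admin level: the pending ASYNC EVENT REQUESTs are aborted, the qpair is freed, and the host loops through disconnect, reconnect (failing with errno 111), and reset roughly once a second until the listener comes back. While it is looping, the controller's state can be inspected through the bdevperf application's RPC socket; a small sketch using the standard bdev_nvme_get_controllers RPC (its output fields are not shown in this log):

    # Query controller state via bdevperf's RPC socket while the listener is down (sketch).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers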
01:19:28.794 [2024-07-22 11:16:33.717833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:29.728 [2024-07-22 11:16:34.716375] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:19:29.728 [2024-07-22 11:16:34.716435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c2fa0 with addr=10.0.0.2, port=4420 01:19:29.728 [2024-07-22 11:16:34.716448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c2fa0 is same with the state(5) to be set 01:19:29.728 [2024-07-22 11:16:34.716472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c2fa0 (9): Bad file descriptor 01:19:29.728 [2024-07-22 11:16:34.716487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:19:29.728 [2024-07-22 11:16:34.716497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:19:29.728 [2024-07-22 11:16:34.716507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:29.728 [2024-07-22 11:16:34.716529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:19:29.728 [2024-07-22 11:16:34.716538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:30.664 [2024-07-22 11:16:35.715052] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:19:30.664 [2024-07-22 11:16:35.715114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c2fa0 with addr=10.0.0.2, port=4420 01:19:30.664 [2024-07-22 11:16:35.715129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c2fa0 is same with the state(5) to be set 01:19:30.664 [2024-07-22 11:16:35.715153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c2fa0 (9): Bad file descriptor 01:19:30.664 [2024-07-22 11:16:35.715169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:19:30.664 [2024-07-22 11:16:35.715178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:19:30.664 [2024-07-22 11:16:35.715189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:30.664 [2024-07-22 11:16:35.715211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:19:30.664 [2024-07-22 11:16:35.715221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:31.599 11:16:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:19:31.599 [2024-07-22 11:16:36.716058] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:19:31.599 [2024-07-22 11:16:36.716104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c2fa0 with addr=10.0.0.2, port=4420 01:19:31.599 [2024-07-22 11:16:36.716117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c2fa0 is same with the state(5) to be set 01:19:31.599 [2024-07-22 11:16:36.716301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c2fa0 (9): Bad file descriptor 01:19:31.599 [2024-07-22 11:16:36.716479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:19:31.599 [2024-07-22 11:16:36.716488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:19:31.599 [2024-07-22 11:16:36.716498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:31.599 [2024-07-22 11:16:36.719261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:19:31.599 [2024-07-22 11:16:36.719292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:31.858 [2024-07-22 11:16:36.898028] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:19:31.858 11:16:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 97067 01:19:32.823 [2024-07-22 11:16:37.746177] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
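Recovery is driven entirely from the target side: host/timeout.sh@102 re-creates the listener, nvmf_tcp reports "Target Listening on 10.0.0.2 port 4420", and the next pending reconnect attempt completes with "Resetting controller successful". The step itself is just the one RPC below, copied from the xtrace above; pass rpc.py's -s <socket> option before the subcommand if the target app is not on its default RPC socket.

# Re-add the TCP listener that the test removed earlier; the subsystem NQN,
# address and port are the ones used throughout this run.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420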
01:19:38.098 01:19:38.098 Latency(us) 01:19:38.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:19:38.098 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:19:38.098 Verification LBA range: start 0x0 length 0x4000 01:19:38.098 NVMe0n1 : 10.01 6444.52 25.17 5177.00 0.00 10989.28 470.46 3032026.99 01:19:38.098 =================================================================================================================== 01:19:38.098 Total : 6444.52 25.17 5177.00 0.00 10989.28 0.00 3032026.99 01:19:38.098 0 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96933 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96933 ']' 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96933 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96933 01:19:38.098 killing process with pid 96933 01:19:38.098 Received shutdown signal, test time was about 10.000000 seconds 01:19:38.098 01:19:38.098 Latency(us) 01:19:38.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:19:38.098 =================================================================================================================== 01:19:38.098 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96933' 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96933 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96933 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97181 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97181 /var/tmp/bdevperf.sock 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 97181 ']' 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:19:38.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 01:19:38.098 11:16:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:19:38.098 [2024-07-22 11:16:42.890006] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:19:38.098 [2024-07-22 11:16:42.890087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97181 ] 01:19:38.098 [2024-07-22 11:16:43.032812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:38.098 [2024-07-22 11:16:43.083200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:19:38.098 [2024-07-22 11:16:43.126240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:19:38.671 11:16:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:19:38.671 11:16:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 01:19:38.671 11:16:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97190 01:19:38.671 11:16:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97181 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 01:19:38.671 11:16:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 01:19:38.928 11:16:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 01:19:39.184 NVMe0n1 01:19:39.184 11:16:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97233 01:19:39.184 11:16:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:19:39.184 11:16:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 01:19:39.184 Running I/O for 10 seconds... 
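For reference, the second bdevperf instance is brought up by the sequence of xtrace lines above; collapsed into one sketch it looks like the following. Every path, flag and value is taken verbatim from the log (pid 97181 is specific to this run); the only assumption added here is that bdevperf and the helper scripts are backgrounded so the RPCs can follow.

# Start bdevperf idle (-z) so the perform_tests RPC below kicks off the run;
# -q/-o/-w/-t match the 128-deep, 4096-byte, randread, 10 s workload of this test.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &

# Attach the nvmf_timeout bpftrace probes to the new bdevperf pid.
/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97181 \
    /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &

# bdev_nvme options exactly as the harness set them (-r -1 -e 9).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_set_options -r -1 -e 9

# Attach the target with a 5 s controller-loss timeout and a 2 s reconnect delay.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Start the 10-second run.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &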
01:19:40.123 11:16:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:19:40.385 [2024-07-22 11:16:45.402752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403054] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403069] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403114] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403140] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403224] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403299] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403404] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the 
state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e00d0 is same with the state(5) to be set 01:19:40.385 [2024-07-22 11:16:45.403619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.385 [2024-07-22 11:16:45.403650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.385 [2024-07-22 11:16:45.403669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.385 [2024-07-22 11:16:45.403679] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.385 [2024-07-22 11:16:45.403689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.385 [2024-07-22 11:16:45.403698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.385 [2024-07-22 11:16:45.403708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.385 [2024-07-22 11:16:45.403717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.385 [2024-07-22 11:16:45.403728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.385 [2024-07-22 11:16:45.403736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.385 [2024-07-22 11:16:45.403746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.385 [2024-07-22 11:16:45.403755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.385 [2024-07-22 11:16:45.403765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.385 [2024-07-22 11:16:45.403773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.385 [2024-07-22 11:16:45.403784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.385 [2024-07-22 11:16:45.403793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.385 [2024-07-22 11:16:45.403803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.385 [2024-07-22 11:16:45.403811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.385 [2024-07-22 11:16:45.403822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:33288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.385 [2024-07-22 11:16:45.403830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.385 [2024-07-22 11:16:45.403840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.385 [2024-07-22 11:16:45.403861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.385 [2024-07-22 11:16:45.403871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.385 [2024-07-22 11:16:45.403880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.385 [2024-07-22 11:16:45.403891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:33296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.403900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.403910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:58272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.403919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.403929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.403937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.403947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.403956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.403966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.403975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.403986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.403994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:19:40.386 [2024-07-22 11:16:45.404452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 
11:16:45.404639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.386 [2024-07-22 11:16:45.404675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.386 [2024-07-22 11:16:45.404684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.404987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.404995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405024] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:57208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27304 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.387 [2024-07-22 11:16:45.405481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.387 [2024-07-22 11:16:45.405503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 
11:16:45.405607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.405985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.405995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:36056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.406003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.406014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.406023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.406032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.406041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.406051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.406059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.406069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:40.388 [2024-07-22 11:16:45.406078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.406087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b132f0 is same with the state(5) to be set 01:19:40.388 [2024-07-22 11:16:45.406098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:40.388 [2024-07-22 11:16:45.406105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:40.388 [2024-07-22 11:16:45.406113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25176 len:8 PRP1 0x0 PRP2 0x0 01:19:40.388 [2024-07-22 11:16:45.406122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:40.388 [2024-07-22 11:16:45.406169] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b132f0 was disconnected and freed. reset controller. 
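The flood of NOTICE pairs above is the expected effect of the reset path: every READ still queued on submission queue 1 completes with the generic NVMe status 0x08, "Command Aborted due to SQ Deletion" (printed as 00/08, status-code-type/status-code), before qpair 0x1b132f0 is disconnected and freed. When triaging a run like this it is usually enough to summarize the aborts rather than read them one by one; a minimal sketch against a saved console log (the file name here is an assumption):

grep -c 'ABORTED - SQ DELETION' console.log          # total aborted completions
grep -o 'lba:[0-9]*' console.log | sort -u | wc -l   # distinct LBAs the aborted READs covered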
01:19:40.388 [2024-07-22 11:16:45.406418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:40.388 [2024-07-22 11:16:45.406911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af2e60 (9): Bad file descriptor 01:19:40.388 [2024-07-22 11:16:45.407093] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:19:40.388 [2024-07-22 11:16:45.407195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af2e60 with addr=10.0.0.2, port=4420 01:19:40.388 [2024-07-22 11:16:45.407287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2e60 is same with the state(5) to be set 01:19:40.388 [2024-07-22 11:16:45.407443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af2e60 (9): Bad file descriptor 01:19:40.388 [2024-07-22 11:16:45.407497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:19:40.388 [2024-07-22 11:16:45.407506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:19:40.388 [2024-07-22 11:16:45.407516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:40.388 [2024-07-22 11:16:45.407534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:19:40.388 [2024-07-22 11:16:45.407546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:40.388 11:16:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 97233 01:19:42.295 [2024-07-22 11:16:47.404549] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:19:42.295 [2024-07-22 11:16:47.404642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af2e60 with addr=10.0.0.2, port=4420 01:19:42.295 [2024-07-22 11:16:47.404661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2e60 is same with the state(5) to be set 01:19:42.295 [2024-07-22 11:16:47.404693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af2e60 (9): Bad file descriptor 01:19:42.295 [2024-07-22 11:16:47.404712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:19:42.295 [2024-07-22 11:16:47.404722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:19:42.295 [2024-07-22 11:16:47.404734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:42.295 [2024-07-22 11:16:47.404764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
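Each retry cycle above has the same shape: the TCP connect() to 10.0.0.2 port 4420 is refused (errno 111, so nothing is accepting on that address any more), controller re-initialization fails, nvme_ctrlr_fail marks the controller failed, and bdev_nvme schedules the next reset attempt after the configured reconnect delay. That cadence is set when the timeout test attaches the controller; a hedged sketch of such an attach call (the RPC socket path and the timeout values are illustrative, and the flag names should be checked against `rpc.py bdev_nvme_attach_controller --help` on the SPDK build in use):

# Sketch: attach a TCP controller with an explicit reconnect policy (values illustrative)
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 10 --reconnect-delay-sec 2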
01:19:42.295 [2024-07-22 11:16:47.404774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:44.203 [2024-07-22 11:16:49.401786] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:19:44.203 [2024-07-22 11:16:49.401911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af2e60 with addr=10.0.0.2, port=4420 01:19:44.203 [2024-07-22 11:16:49.401930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2e60 is same with the state(5) to be set 01:19:44.203 [2024-07-22 11:16:49.401967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af2e60 (9): Bad file descriptor 01:19:44.203 [2024-07-22 11:16:49.401987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:19:44.203 [2024-07-22 11:16:49.401998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:19:44.203 [2024-07-22 11:16:49.402013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:44.203 [2024-07-22 11:16:49.402043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:19:44.203 [2024-07-22 11:16:49.402054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:46.730 [2024-07-22 11:16:51.398913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:46.730 [2024-07-22 11:16:51.398996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:19:46.730 [2024-07-22 11:16:51.399008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:19:46.730 [2024-07-22 11:16:51.399019] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 01:19:46.730 [2024-07-22 11:16:51.399048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:19:47.294 01:19:47.294 Latency(us) 01:19:47.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:19:47.294 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 01:19:47.294 NVMe0n1 : 8.11 2305.67 9.01 15.79 0.00 55116.40 1197.55 7061253.96 01:19:47.294 =================================================================================================================== 01:19:47.294 Total : 2305.67 9.01 15.79 0.00 55116.40 1197.55 7061253.96 01:19:47.294 0 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:19:47.294 Attaching 5 probes... 
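The probe timestamps that follow confirm the cadence: the reconnect attempts land at roughly 1087, 3084, 5082 and 7079 ms into the run, i.e. about 2000 ms apart, which is what a 2-second reconnect delay should produce over the ~8.1 s the job ran. The pass condition is simply the count of delayed reconnects recorded in the trace, as the next lines show; the equivalent manual check would be:

# Same check the script performs on the captured trace (path taken from the log above)
grep -c 'reconnect delay bdev controller NVMe0' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

Here the count is 3, so the `(( count <= 2 ))` guard below does not trip and the run proceeds to cleanup as a pass.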
01:19:47.294 1086.438839: reset bdev controller NVMe0 01:19:47.294 1087.055545: reconnect bdev controller NVMe0 01:19:47.294 3084.377636: reconnect delay bdev controller NVMe0 01:19:47.294 3084.408104: reconnect bdev controller NVMe0 01:19:47.294 5081.618415: reconnect delay bdev controller NVMe0 01:19:47.294 5081.649412: reconnect bdev controller NVMe0 01:19:47.294 7078.895014: reconnect delay bdev controller NVMe0 01:19:47.294 7078.933041: reconnect bdev controller NVMe0 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 97190 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97181 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 97181 ']' 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 97181 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97181 01:19:47.294 killing process with pid 97181 01:19:47.294 Received shutdown signal, test time was about 8.203571 seconds 01:19:47.294 01:19:47.294 Latency(us) 01:19:47.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:19:47.294 =================================================================================================================== 01:19:47.294 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97181' 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 97181 01:19:47.294 11:16:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 97181 01:19:47.868 11:16:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:19:47.868 11:16:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 01:19:47.868 11:16:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 01:19:47.868 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 01:19:47.868 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 01:19:47.868 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:19:47.868 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 01:19:47.868 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 01:19:47.868 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:19:47.868 rmmod nvme_tcp 01:19:48.135 rmmod nvme_fabrics 01:19:48.135 rmmod nvme_keyring 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 96753 ']' 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 96753 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96753 ']' 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96753 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96753 01:19:48.135 killing process with pid 96753 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96753' 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96753 01:19:48.135 11:16:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96753 01:19:48.397 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:19:48.397 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:19:48.397 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:19:48.397 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:19:48.397 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 01:19:48.397 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:48.397 11:16:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:19:48.397 11:16:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:48.397 11:16:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:19:48.397 01:19:48.397 real 0m46.183s 01:19:48.397 user 2m12.762s 01:19:48.397 sys 0m6.959s 01:19:48.397 11:16:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 01:19:48.397 11:16:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:19:48.397 ************************************ 01:19:48.397 END TEST nvmf_timeout 01:19:48.397 ************************************ 01:19:48.397 11:16:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:19:48.397 11:16:53 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 01:19:48.397 11:16:53 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 01:19:48.397 11:16:53 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:19:48.397 11:16:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:19:48.397 11:16:53 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 01:19:48.397 ************************************ 01:19:48.397 END TEST nvmf_tcp 01:19:48.397 ************************************ 01:19:48.397 01:19:48.397 real 14m6.372s 01:19:48.397 user 36m20.097s 01:19:48.397 sys 4m38.705s 01:19:48.397 11:16:53 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 01:19:48.397 11:16:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:19:48.662 11:16:53 -- common/autotest_common.sh@1142 -- 
# return 0 01:19:48.662 11:16:53 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 01:19:48.662 11:16:53 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:19:48.662 11:16:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:19:48.662 11:16:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:19:48.662 11:16:53 -- common/autotest_common.sh@10 -- # set +x 01:19:48.662 ************************************ 01:19:48.662 START TEST nvmf_dif 01:19:48.662 ************************************ 01:19:48.662 11:16:53 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:19:48.662 * Looking for test storage... 01:19:48.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:19:48.662 11:16:53 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:19:48.662 11:16:53 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:19:48.662 11:16:53 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:19:48.662 11:16:53 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:19:48.662 11:16:53 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:48.662 11:16:53 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:48.662 11:16:53 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:48.662 11:16:53 nvmf_dif -- paths/export.sh@5 -- # export PATH 01:19:48.662 11:16:53 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@47 -- # : 0 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 01:19:48.662 11:16:53 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 01:19:48.662 11:16:53 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 01:19:48.662 11:16:53 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 01:19:48.662 11:16:53 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 01:19:48.662 11:16:53 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:48.662 11:16:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:19:48.662 11:16:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:19:48.662 11:16:53 nvmf_dif -- 
nvmf/common.sh@432 -- # nvmf_veth_init 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:19:48.662 Cannot find device "nvmf_tgt_br" 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@155 -- # true 01:19:48.662 11:16:53 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:19:48.928 Cannot find device "nvmf_tgt_br2" 01:19:48.928 11:16:53 nvmf_dif -- nvmf/common.sh@156 -- # true 01:19:48.928 11:16:53 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:19:48.928 11:16:53 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:19:48.928 Cannot find device "nvmf_tgt_br" 01:19:48.928 11:16:53 nvmf_dif -- nvmf/common.sh@158 -- # true 01:19:48.928 11:16:53 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:19:48.928 Cannot find device "nvmf_tgt_br2" 01:19:48.928 11:16:53 nvmf_dif -- nvmf/common.sh@159 -- # true 01:19:48.928 11:16:53 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:19:48.928 11:16:53 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:19:48.928 11:16:53 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:48.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:48.928 11:16:53 nvmf_dif -- nvmf/common.sh@162 -- # true 01:19:48.928 11:16:53 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:48.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@163 -- # true 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@178 -- # ip addr 
add 10.0.0.1/24 dev nvmf_init_if 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:19:48.928 11:16:54 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:19:49.195 11:16:54 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:19:49.195 11:16:54 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:19:49.195 11:16:54 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:19:49.195 11:16:54 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:19:49.195 11:16:54 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:19:49.195 11:16:54 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:19:49.195 11:16:54 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:19:49.195 11:16:54 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:19:49.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:19:49.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 01:19:49.195 01:19:49.195 --- 10.0.0.2 ping statistics --- 01:19:49.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:49.195 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 01:19:49.195 11:16:54 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:19:49.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:19:49.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 01:19:49.195 01:19:49.195 --- 10.0.0.3 ping statistics --- 01:19:49.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:49.195 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 01:19:49.195 11:16:54 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:19:49.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:19:49.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 01:19:49.195 01:19:49.195 --- 10.0.0.1 ping statistics --- 01:19:49.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:49.195 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 01:19:49.195 11:16:54 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:19:49.195 11:16:54 nvmf_dif -- nvmf/common.sh@433 -- # return 0 01:19:49.195 11:16:54 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 01:19:49.195 11:16:54 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:19:49.783 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:19:49.783 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:19:49.783 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:19:49.783 11:16:54 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:19:49.783 11:16:54 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:19:49.783 11:16:54 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:19:49.783 11:16:54 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:19:49.783 11:16:54 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:19:49.783 11:16:54 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:19:49.783 11:16:54 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 01:19:49.783 11:16:54 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 01:19:49.783 11:16:54 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:19:49.783 11:16:54 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 01:19:49.783 11:16:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:19:49.783 11:16:54 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97672 01:19:49.783 11:16:54 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97672 01:19:49.783 11:16:54 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 97672 ']' 01:19:49.783 11:16:54 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:49.783 11:16:54 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:19:49.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:49.783 11:16:54 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 01:19:49.783 11:16:54 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:49.783 11:16:54 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 01:19:49.783 11:16:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:19:49.783 [2024-07-22 11:16:54.965343] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:19:49.783 [2024-07-22 11:16:54.965480] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:19:50.044 [2024-07-22 11:16:55.115208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:50.044 [2024-07-22 11:16:55.193200] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
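At this point the bench topology for the DIF tests is in place: an initiator-side veth (nvmf_init_if, 10.0.0.1/24) is bridged over nvmf_br to two target-side veths (nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24) that live inside the nvmf_tgt_ns_spdk network namespace, and nvmf_tgt has just been started inside that namespace. A condensed sketch of the same setup, reduced to a single target interface (interface names, addresses and the nvmf_tgt path are taken from the trace above):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &

A quick `ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1`, as the trace does, verifies that the bridge forwards between the namespace and the host side.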
01:19:50.044 [2024-07-22 11:16:55.193271] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:19:50.044 [2024-07-22 11:16:55.193281] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:19:50.044 [2024-07-22 11:16:55.193290] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:19:50.044 [2024-07-22 11:16:55.193297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:19:50.044 [2024-07-22 11:16:55.193330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:19:50.303 [2024-07-22 11:16:55.267501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:19:50.869 11:16:55 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:19:50.869 11:16:55 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 01:19:50.869 11:16:55 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:19:50.869 11:16:55 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 01:19:50.869 11:16:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:19:50.869 11:16:55 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:19:50.869 11:16:55 nvmf_dif -- target/dif.sh@139 -- # create_transport 01:19:50.869 11:16:55 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 01:19:50.869 11:16:55 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:50.869 11:16:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:19:50.869 [2024-07-22 11:16:55.852839] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:19:50.869 11:16:55 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:50.869 11:16:55 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 01:19:50.869 11:16:55 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:19:50.869 11:16:55 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 01:19:50.869 11:16:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:19:50.869 ************************************ 01:19:50.869 START TEST fio_dif_1_default 01:19:50.869 ************************************ 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:19:50.869 bdev_null0 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:19:50.869 
11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:19:50.869 [2024-07-22 11:16:55.920902] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:19:50.869 { 01:19:50.869 "params": { 01:19:50.869 "name": "Nvme$subsystem", 01:19:50.869 "trtype": "$TEST_TRANSPORT", 01:19:50.869 "traddr": "$NVMF_FIRST_TARGET_IP", 01:19:50.869 "adrfam": "ipv4", 01:19:50.869 "trsvcid": "$NVMF_PORT", 01:19:50.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:19:50.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:19:50.869 "hdgst": ${hdgst:-false}, 01:19:50.869 "ddgst": ${ddgst:-false} 01:19:50.869 }, 01:19:50.869 "method": "bdev_nvme_attach_controller" 01:19:50.869 } 01:19:50.869 EOF 01:19:50.869 )") 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:19:50.869 "params": { 01:19:50.869 "name": "Nvme0", 01:19:50.869 "trtype": "tcp", 01:19:50.869 "traddr": "10.0.0.2", 01:19:50.869 "adrfam": "ipv4", 01:19:50.869 "trsvcid": "4420", 01:19:50.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:19:50.869 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:19:50.869 "hdgst": false, 01:19:50.869 "ddgst": false 01:19:50.869 }, 01:19:50.869 "method": "bdev_nvme_attach_controller" 01:19:50.869 }' 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:19:50.869 11:16:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:19:50.869 11:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 01:19:50.869 11:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:19:50.869 11:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:19:50.869 11:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:19:51.127 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:19:51.127 fio-3.35 01:19:51.127 Starting 1 thread 01:20:03.332 01:20:03.332 filename0: (groupid=0, jobs=1): err= 0: pid=97739: Mon Jul 22 11:17:06 2024 01:20:03.332 read: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(458MiB/10001msec) 01:20:03.332 slat (usec): min=5, max=640, avg= 6.24, stdev= 3.46 01:20:03.332 clat (usec): min=282, max=3898, avg=324.12, stdev=43.10 01:20:03.332 lat (usec): min=287, max=3912, avg=330.37, stdev=43.85 01:20:03.332 clat percentiles (usec): 01:20:03.332 | 1.00th=[ 293], 5.00th=[ 297], 
10.00th=[ 302], 20.00th=[ 310], 01:20:03.332 | 30.00th=[ 314], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 01:20:03.332 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 347], 95.00th=[ 359], 01:20:03.332 | 99.00th=[ 404], 99.50th=[ 433], 99.90th=[ 783], 99.95th=[ 1045], 01:20:03.332 | 99.99th=[ 2343] 01:20:03.332 bw ( KiB/s): min=43840, max=49088, per=100.00%, avg=46967.58, stdev=1356.97, samples=19 01:20:03.332 iops : min=10960, max=12272, avg=11741.89, stdev=339.24, samples=19 01:20:03.332 lat (usec) : 500=99.80%, 750=0.10%, 1000=0.03% 01:20:03.332 lat (msec) : 2=0.06%, 4=0.01% 01:20:03.332 cpu : usr=81.35%, sys=17.15%, ctx=32, majf=0, minf=0 01:20:03.332 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:03.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:03.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:03.332 issued rwts: total=117200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:03.332 latency : target=0, window=0, percentile=100.00%, depth=4 01:20:03.332 01:20:03.332 Run status group 0 (all jobs): 01:20:03.332 READ: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=458MiB (480MB), run=10001-10001msec 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:20:03.332 ************************************ 01:20:03.332 END TEST fio_dif_1_default 01:20:03.332 ************************************ 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:03.332 01:20:03.332 real 0m11.016s 01:20:03.332 user 0m8.720s 01:20:03.332 sys 0m2.100s 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:20:03.332 11:17:06 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 01:20:03.332 11:17:06 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 01:20:03.332 11:17:06 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:20:03.332 11:17:06 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 01:20:03.332 11:17:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:20:03.332 ************************************ 01:20:03.332 START TEST fio_dif_1_multi_subsystems 01:20:03.332 ************************************ 
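The multi-subsystem variant that starts here repeats the single-subsystem flow with two DIF-enabled null bdevs, each exported through its own subsystem on the same 10.0.0.2:4420 listener, and then drives both from one fio process with two job threads. The per-subsystem setup the trace performs through rpc_cmd corresponds to the following rpc.py calls (shown for the second subsystem; the rpc.py path and default RPC socket are assumptions):

# One DIF type-1 null bdev (64 MB, 512-byte blocks, 16 bytes of metadata per block) behind its own subsystem
./scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420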
01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:03.332 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:20:03.333 bdev_null0 01:20:03.333 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:03.333 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:20:03.333 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:03.333 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:20:03.333 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:03.333 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:20:03.333 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:03.333 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:20:03.333 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:03.333 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:20:03.333 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:03.333 11:17:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:20:03.333 [2024-07-22 11:17:07.006833] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:20:03.333 bdev_null1 01:20:03.333 11:17:07 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:20:03.333 { 01:20:03.333 "params": { 01:20:03.333 "name": "Nvme$subsystem", 01:20:03.333 "trtype": "$TEST_TRANSPORT", 01:20:03.333 "traddr": "$NVMF_FIRST_TARGET_IP", 01:20:03.333 "adrfam": "ipv4", 01:20:03.333 "trsvcid": "$NVMF_PORT", 01:20:03.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:20:03.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:20:03.333 "hdgst": ${hdgst:-false}, 01:20:03.333 "ddgst": ${ddgst:-false} 01:20:03.333 }, 01:20:03.333 "method": "bdev_nvme_attach_controller" 01:20:03.333 } 01:20:03.333 EOF 01:20:03.333 )") 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:20:03.333 { 01:20:03.333 "params": { 01:20:03.333 "name": "Nvme$subsystem", 01:20:03.333 "trtype": "$TEST_TRANSPORT", 01:20:03.333 "traddr": "$NVMF_FIRST_TARGET_IP", 01:20:03.333 "adrfam": "ipv4", 01:20:03.333 "trsvcid": "$NVMF_PORT", 01:20:03.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:20:03.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:20:03.333 "hdgst": ${hdgst:-false}, 01:20:03.333 "ddgst": ${ddgst:-false} 01:20:03.333 }, 01:20:03.333 "method": "bdev_nvme_attach_controller" 01:20:03.333 } 01:20:03.333 EOF 01:20:03.333 )") 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
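The two per-controller JSON fragments assembled above are merged by jq into one document and handed to fio on /dev/fd/62, while the generated job file arrives on /dev/fd/61; the resulting parameter blocks for Nvme0 and Nvme1 are printed just below. A standalone equivalent would save the configuration to a file and point the spdk_bdev fio plugin at it; in this sketch the outer "subsystems"/"config" wrapper and the job-file name are assumptions, while the two params blocks are exactly the ones the trace prints:

cat > bdev.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } },
  { "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } }
] } ] }
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.job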
01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:20:03.333 "params": { 01:20:03.333 "name": "Nvme0", 01:20:03.333 "trtype": "tcp", 01:20:03.333 "traddr": "10.0.0.2", 01:20:03.333 "adrfam": "ipv4", 01:20:03.333 "trsvcid": "4420", 01:20:03.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:20:03.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:20:03.333 "hdgst": false, 01:20:03.333 "ddgst": false 01:20:03.333 }, 01:20:03.333 "method": "bdev_nvme_attach_controller" 01:20:03.333 },{ 01:20:03.333 "params": { 01:20:03.333 "name": "Nvme1", 01:20:03.333 "trtype": "tcp", 01:20:03.333 "traddr": "10.0.0.2", 01:20:03.333 "adrfam": "ipv4", 01:20:03.333 "trsvcid": "4420", 01:20:03.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:20:03.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:20:03.333 "hdgst": false, 01:20:03.333 "ddgst": false 01:20:03.333 }, 01:20:03.333 "method": "bdev_nvme_attach_controller" 01:20:03.333 }' 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:20:03.333 11:17:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:03.333 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:20:03.333 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:20:03.333 fio-3.35 01:20:03.333 Starting 2 threads 01:20:13.305 01:20:13.305 filename0: (groupid=0, jobs=1): err= 0: pid=97902: Mon Jul 22 11:17:17 2024 01:20:13.305 read: IOPS=5858, BW=22.9MiB/s (24.0MB/s)(229MiB/10001msec) 01:20:13.305 slat (nsec): min=5783, max=73180, avg=12283.81, stdev=5133.07 01:20:13.305 clat (usec): min=133, max=1304, avg=648.01, stdev=44.00 01:20:13.305 lat (usec): min=150, max=1341, avg=660.29, stdev=44.68 01:20:13.305 clat percentiles (usec): 01:20:13.305 | 1.00th=[ 570], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 611], 01:20:13.305 | 30.00th=[ 619], 40.00th=[ 635], 50.00th=[ 644], 60.00th=[ 652], 01:20:13.305 | 70.00th=[ 668], 80.00th=[ 685], 90.00th=[ 701], 95.00th=[ 725], 01:20:13.305 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 898], 99.95th=[ 988], 01:20:13.305 | 99.99th=[ 1221] 01:20:13.305 bw ( KiB/s): min=22240, max=24384, per=50.12%, avg=23494.74, stdev=717.97, samples=19 01:20:13.305 iops : min= 5560, max= 
6096, avg=5873.68, stdev=179.49, samples=19 01:20:13.305 lat (usec) : 250=0.01%, 500=0.01%, 750=98.21%, 1000=1.73% 01:20:13.305 lat (msec) : 2=0.05% 01:20:13.305 cpu : usr=90.10%, sys=8.84%, ctx=9, majf=0, minf=0 01:20:13.305 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:13.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:13.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:13.305 issued rwts: total=58589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:13.305 latency : target=0, window=0, percentile=100.00%, depth=4 01:20:13.305 filename1: (groupid=0, jobs=1): err= 0: pid=97903: Mon Jul 22 11:17:17 2024 01:20:13.305 read: IOPS=5860, BW=22.9MiB/s (24.0MB/s)(229MiB/10001msec) 01:20:13.305 slat (usec): min=5, max=100, avg=13.72, stdev= 8.47 01:20:13.305 clat (usec): min=342, max=1303, avg=641.55, stdev=40.26 01:20:13.305 lat (usec): min=349, max=1343, avg=655.27, stdev=42.98 01:20:13.305 clat percentiles (usec): 01:20:13.305 | 1.00th=[ 562], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 611], 01:20:13.305 | 30.00th=[ 619], 40.00th=[ 627], 50.00th=[ 635], 60.00th=[ 652], 01:20:13.305 | 70.00th=[ 660], 80.00th=[ 668], 90.00th=[ 693], 95.00th=[ 709], 01:20:13.305 | 99.00th=[ 750], 99.50th=[ 775], 99.90th=[ 881], 99.95th=[ 938], 01:20:13.305 | 99.99th=[ 1237] 01:20:13.305 bw ( KiB/s): min=22240, max=24384, per=50.14%, avg=23502.26, stdev=721.20, samples=19 01:20:13.305 iops : min= 5560, max= 6096, avg=5875.53, stdev=180.25, samples=19 01:20:13.305 lat (usec) : 500=0.04%, 750=99.01%, 1000=0.90% 01:20:13.305 lat (msec) : 2=0.05% 01:20:13.305 cpu : usr=91.11%, sys=7.79%, ctx=8, majf=0, minf=0 01:20:13.305 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:13.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:13.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:13.305 issued rwts: total=58612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:13.305 latency : target=0, window=0, percentile=100.00%, depth=4 01:20:13.305 01:20:13.305 Run status group 0 (all jobs): 01:20:13.305 READ: bw=45.8MiB/s (48.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=458MiB (480MB), run=10001-10001msec 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:20:13.305 ************************************ 01:20:13.305 END TEST fio_dif_1_multi_subsystems 01:20:13.305 ************************************ 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:13.305 01:20:13.305 real 0m11.106s 01:20:13.305 user 0m18.814s 01:20:13.305 sys 0m2.019s 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 01:20:13.305 11:17:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:20:13.305 11:17:18 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 01:20:13.305 11:17:18 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 01:20:13.305 11:17:18 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:20:13.305 11:17:18 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 01:20:13.305 11:17:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:20:13.305 ************************************ 01:20:13.305 START TEST fio_dif_rand_params 01:20:13.305 ************************************ 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:20:13.305 11:17:18 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:13.305 bdev_null0 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:13.305 [2024-07-22 11:17:18.183229] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 01:20:13.305 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:20:13.306 { 01:20:13.306 "params": { 01:20:13.306 "name": "Nvme$subsystem", 01:20:13.306 "trtype": "$TEST_TRANSPORT", 01:20:13.306 "traddr": "$NVMF_FIRST_TARGET_IP", 01:20:13.306 "adrfam": "ipv4", 01:20:13.306 "trsvcid": "$NVMF_PORT", 01:20:13.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:20:13.306 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 01:20:13.306 "hdgst": ${hdgst:-false}, 01:20:13.306 "ddgst": ${ddgst:-false} 01:20:13.306 }, 01:20:13.306 "method": "bdev_nvme_attach_controller" 01:20:13.306 } 01:20:13.306 EOF 01:20:13.306 )") 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:20:13.306 "params": { 01:20:13.306 "name": "Nvme0", 01:20:13.306 "trtype": "tcp", 01:20:13.306 "traddr": "10.0.0.2", 01:20:13.306 "adrfam": "ipv4", 01:20:13.306 "trsvcid": "4420", 01:20:13.306 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:20:13.306 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:20:13.306 "hdgst": false, 01:20:13.306 "ddgst": false 01:20:13.306 }, 01:20:13.306 "method": "bdev_nvme_attach_controller" 01:20:13.306 }' 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:20:13.306 11:17:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:13.306 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:20:13.306 ... 
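The ldd | grep | awk probes and the LD_PRELOAD assignment traced above are the fio_bdev/fio_plugin helper deciding whether a sanitizer runtime has to be preloaded together with the SPDK fio plugin: the stock /usr/src/fio binary is not built with ASAN, so if the plugin links libasan or libclang_rt.asan, that library must come first in LD_PRELOAD. In this run neither shows up in the ldd output, which is why LD_PRELOAD ends up as just the plugin path with a leading space. A rough, simplified sketch of that logic (not the actual autotest_common.sh implementation):

run_spdk_fio_sketch() {
    local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    local sanitizers=('libasan' 'libclang_rt.asan')
    local sanitizer asan_lib preload=''
    for sanitizer in "${sanitizers[@]}"; do
        # Same probe as the trace: pick the resolved library path out of ldd output.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && preload+="$asan_lib "
    done
    # The plugin itself is always preloaded; with no sanitizer found this yields
    # the " /home/.../spdk_bdev" value seen in the trace.
    LD_PRELOAD="$preload $plugin" /usr/src/fio/fio "$@"
}
# e.g. run_spdk_fio_sketch --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61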
01:20:13.306 fio-3.35 01:20:13.306 Starting 3 threads 01:20:19.882 01:20:19.882 filename0: (groupid=0, jobs=1): err= 0: pid=98060: Mon Jul 22 11:17:23 2024 01:20:19.882 read: IOPS=312, BW=39.1MiB/s (41.0MB/s)(196MiB/5005msec) 01:20:19.882 slat (nsec): min=5883, max=69501, avg=17983.70, stdev=8205.01 01:20:19.882 clat (usec): min=6185, max=61616, avg=9545.31, stdev=3477.27 01:20:19.882 lat (usec): min=6198, max=61686, avg=9563.29, stdev=3477.74 01:20:19.882 clat percentiles (usec): 01:20:19.882 | 1.00th=[ 8848], 5.00th=[ 8979], 10.00th=[ 8979], 20.00th=[ 9110], 01:20:19.882 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9241], 60.00th=[ 9241], 01:20:19.882 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9634], 95.00th=[ 9896], 01:20:19.882 | 99.00th=[10290], 99.50th=[51119], 99.90th=[61604], 99.95th=[61604], 01:20:19.882 | 99.99th=[61604] 01:20:19.882 bw ( KiB/s): min=29184, max=42240, per=33.11%, avg=39756.22, stdev=4021.06, samples=9 01:20:19.882 iops : min= 228, max= 330, avg=310.56, stdev=31.41, samples=9 01:20:19.882 lat (msec) : 10=97.45%, 20=1.98%, 100=0.57% 01:20:19.882 cpu : usr=90.19%, sys=9.31%, ctx=7, majf=0, minf=0 01:20:19.882 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:19.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:19.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:19.882 issued rwts: total=1566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:19.882 latency : target=0, window=0, percentile=100.00%, depth=3 01:20:19.882 filename0: (groupid=0, jobs=1): err= 0: pid=98061: Mon Jul 22 11:17:23 2024 01:20:19.882 read: IOPS=312, BW=39.1MiB/s (41.0MB/s)(195MiB/5002msec) 01:20:19.882 slat (nsec): min=5812, max=64965, avg=23131.06, stdev=14443.48 01:20:19.882 clat (usec): min=8874, max=56797, avg=9543.34, stdev=3403.42 01:20:19.882 lat (usec): min=8879, max=56825, avg=9566.47, stdev=3403.49 01:20:19.882 clat percentiles (usec): 01:20:19.882 | 1.00th=[ 8848], 5.00th=[ 8979], 10.00th=[ 8979], 20.00th=[ 9110], 01:20:19.882 | 30.00th=[ 9110], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9241], 01:20:19.882 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9634], 95.00th=[ 9896], 01:20:19.882 | 99.00th=[10290], 99.50th=[51119], 99.90th=[56886], 99.95th=[56886], 01:20:19.882 | 99.99th=[56886] 01:20:19.882 bw ( KiB/s): min=28416, max=42240, per=33.10%, avg=39747.22, stdev=4302.49, samples=9 01:20:19.882 iops : min= 222, max= 330, avg=310.44, stdev=33.60, samples=9 01:20:19.882 lat (msec) : 10=97.70%, 20=1.54%, 50=0.19%, 100=0.58% 01:20:19.882 cpu : usr=92.56%, sys=6.94%, ctx=29, majf=0, minf=0 01:20:19.882 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:19.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:19.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:19.882 issued rwts: total=1563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:19.882 latency : target=0, window=0, percentile=100.00%, depth=3 01:20:19.882 filename0: (groupid=0, jobs=1): err= 0: pid=98062: Mon Jul 22 11:17:23 2024 01:20:19.882 read: IOPS=312, BW=39.1MiB/s (41.0MB/s)(196MiB/5004msec) 01:20:19.882 slat (usec): min=6, max=105, avg=23.85, stdev=15.06 01:20:19.882 clat (usec): min=6185, max=67345, avg=9526.24, stdev=3636.97 01:20:19.882 lat (usec): min=6198, max=67382, avg=9550.09, stdev=3636.69 01:20:19.882 clat percentiles (usec): 01:20:19.882 | 1.00th=[ 8848], 5.00th=[ 8979], 10.00th=[ 8979], 20.00th=[ 9110], 01:20:19.882 | 30.00th=[ 
9110], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9241], 01:20:19.882 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9634], 95.00th=[ 9765], 01:20:19.882 | 99.00th=[10290], 99.50th=[51119], 99.90th=[67634], 99.95th=[67634], 01:20:19.882 | 99.99th=[67634] 01:20:19.882 bw ( KiB/s): min=29242, max=42240, per=33.12%, avg=39762.67, stdev=4002.00, samples=9 01:20:19.882 iops : min= 228, max= 330, avg=310.56, stdev=31.41, samples=9 01:20:19.882 lat (msec) : 10=97.70%, 20=1.72%, 100=0.57% 01:20:19.882 cpu : usr=92.44%, sys=7.04%, ctx=6, majf=0, minf=9 01:20:19.882 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:19.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:19.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:19.882 issued rwts: total=1566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:19.882 latency : target=0, window=0, percentile=100.00%, depth=3 01:20:19.882 01:20:19.882 Run status group 0 (all jobs): 01:20:19.882 READ: bw=117MiB/s (123MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=587MiB (615MB), run=5002-5005msec 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 2 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:19.882 bdev_null0 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:19.882 [2024-07-22 11:17:24.154979] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:19.882 bdev_null1 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:19.882 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:19.883 
11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:19.883 bdev_null2 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:20:19.883 { 01:20:19.883 "params": { 01:20:19.883 "name": "Nvme$subsystem", 01:20:19.883 "trtype": "$TEST_TRANSPORT", 01:20:19.883 "traddr": "$NVMF_FIRST_TARGET_IP", 01:20:19.883 "adrfam": "ipv4", 01:20:19.883 "trsvcid": "$NVMF_PORT", 01:20:19.883 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 01:20:19.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:20:19.883 "hdgst": ${hdgst:-false}, 01:20:19.883 "ddgst": ${ddgst:-false} 01:20:19.883 }, 01:20:19.883 "method": "bdev_nvme_attach_controller" 01:20:19.883 } 01:20:19.883 EOF 01:20:19.883 )") 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:20:19.883 { 01:20:19.883 "params": { 01:20:19.883 "name": "Nvme$subsystem", 01:20:19.883 "trtype": "$TEST_TRANSPORT", 01:20:19.883 "traddr": "$NVMF_FIRST_TARGET_IP", 01:20:19.883 "adrfam": "ipv4", 01:20:19.883 "trsvcid": "$NVMF_PORT", 01:20:19.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:20:19.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:20:19.883 "hdgst": ${hdgst:-false}, 01:20:19.883 "ddgst": ${ddgst:-false} 01:20:19.883 }, 01:20:19.883 "method": "bdev_nvme_attach_controller" 01:20:19.883 } 01:20:19.883 EOF 01:20:19.883 )") 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:20:19.883 { 01:20:19.883 "params": { 01:20:19.883 "name": "Nvme$subsystem", 01:20:19.883 "trtype": "$TEST_TRANSPORT", 01:20:19.883 "traddr": "$NVMF_FIRST_TARGET_IP", 01:20:19.883 "adrfam": "ipv4", 01:20:19.883 "trsvcid": "$NVMF_PORT", 01:20:19.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:20:19.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:20:19.883 "hdgst": ${hdgst:-false}, 01:20:19.883 "ddgst": ${ddgst:-false} 01:20:19.883 }, 01:20:19.883 "method": "bdev_nvme_attach_controller" 01:20:19.883 } 01:20:19.883 EOF 01:20:19.883 )") 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 01:20:19.883 11:17:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:20:19.883 "params": { 01:20:19.883 "name": "Nvme0", 01:20:19.883 "trtype": "tcp", 01:20:19.883 "traddr": "10.0.0.2", 01:20:19.883 "adrfam": "ipv4", 01:20:19.883 "trsvcid": "4420", 01:20:19.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:20:19.883 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:20:19.883 "hdgst": false, 01:20:19.883 "ddgst": false 01:20:19.883 }, 01:20:19.883 "method": "bdev_nvme_attach_controller" 01:20:19.883 },{ 01:20:19.883 "params": { 01:20:19.883 "name": "Nvme1", 01:20:19.883 "trtype": "tcp", 01:20:19.883 "traddr": "10.0.0.2", 01:20:19.883 "adrfam": "ipv4", 01:20:19.883 "trsvcid": "4420", 01:20:19.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:20:19.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:20:19.884 "hdgst": false, 01:20:19.884 "ddgst": false 01:20:19.884 }, 01:20:19.884 "method": "bdev_nvme_attach_controller" 01:20:19.884 },{ 01:20:19.884 "params": { 01:20:19.884 "name": "Nvme2", 01:20:19.884 "trtype": "tcp", 01:20:19.884 "traddr": "10.0.0.2", 01:20:19.884 "adrfam": "ipv4", 01:20:19.884 "trsvcid": "4420", 01:20:19.884 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:20:19.884 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:20:19.884 "hdgst": false, 01:20:19.884 "ddgst": false 01:20:19.884 }, 01:20:19.884 "method": "bdev_nvme_attach_controller" 01:20:19.884 }' 01:20:19.884 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:20:19.884 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:20:19.884 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:20:19.884 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:20:19.884 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:19.884 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:20:19.884 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:20:19.884 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:20:19.884 11:17:24 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:20:19.884 11:17:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:19.884 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:20:19.884 ... 01:20:19.884 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:20:19.884 ... 01:20:19.884 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:20:19.884 ... 01:20:19.884 fio-3.35 01:20:19.884 Starting 24 threads 01:20:32.100 01:20:32.100 filename0: (groupid=0, jobs=1): err= 0: pid=98157: Mon Jul 22 11:17:35 2024 01:20:32.100 read: IOPS=275, BW=1100KiB/s (1127kB/s)(10.8MiB/10020msec) 01:20:32.101 slat (usec): min=6, max=8042, avg=33.02, stdev=373.59 01:20:32.101 clat (msec): min=22, max=117, avg=58.01, stdev=13.12 01:20:32.101 lat (msec): min=22, max=117, avg=58.04, stdev=13.13 01:20:32.101 clat percentiles (msec): 01:20:32.101 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 01:20:32.101 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 01:20:32.101 | 70.00th=[ 63], 80.00th=[ 69], 90.00th=[ 74], 95.00th=[ 83], 01:20:32.101 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 104], 99.95th=[ 118], 01:20:32.101 | 99.99th=[ 118] 01:20:32.101 bw ( KiB/s): min= 928, max= 1248, per=4.04%, avg=1098.80, stdev=80.31, samples=20 01:20:32.101 iops : min= 232, max= 312, avg=274.70, stdev=20.08, samples=20 01:20:32.101 lat (msec) : 50=28.66%, 100=71.15%, 250=0.18% 01:20:32.101 cpu : usr=32.42%, sys=2.13%, ctx=924, majf=0, minf=9 01:20:32.101 IO depths : 1=0.1%, 2=1.2%, 4=5.0%, 8=77.6%, 16=16.1%, 32=0.0%, >=64=0.0% 01:20:32.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 complete : 0=0.0%, 4=88.9%, 8=10.0%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 issued rwts: total=2756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.101 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.101 filename0: (groupid=0, jobs=1): err= 0: pid=98158: Mon Jul 22 11:17:35 2024 01:20:32.101 read: IOPS=286, BW=1147KiB/s (1175kB/s)(11.2MiB/10021msec) 01:20:32.101 slat (usec): min=3, max=8041, avg=43.15, stdev=453.68 01:20:32.101 clat (msec): min=24, max=104, avg=55.60, stdev=12.88 01:20:32.101 lat (msec): min=24, max=104, avg=55.64, stdev=12.87 01:20:32.101 clat percentiles (msec): 01:20:32.101 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 46], 01:20:32.101 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 58], 60.00th=[ 61], 01:20:32.101 | 70.00th=[ 62], 80.00th=[ 67], 90.00th=[ 71], 95.00th=[ 74], 01:20:32.101 | 99.00th=[ 91], 99.50th=[ 95], 99.90th=[ 105], 99.95th=[ 105], 01:20:32.101 | 99.99th=[ 105] 01:20:32.101 bw ( KiB/s): min= 1024, max= 1328, per=4.20%, avg=1143.05, stdev=77.29, samples=20 01:20:32.101 iops : min= 256, max= 332, avg=285.75, stdev=19.32, samples=20 01:20:32.101 lat (msec) : 50=40.19%, 100=59.71%, 250=0.10% 01:20:32.101 cpu : usr=31.38%, sys=1.85%, ctx=855, majf=0, minf=9 01:20:32.101 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=81.8%, 16=16.5%, 32=0.0%, >=64=0.0% 01:20:32.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 issued rwts: total=2874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
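A note on reading the per-job blocks around this point: fio prints one block per job (pid), each job's bandwidth is simply its average IOPS times the 4 KiB block size, and the per= value in the bw line is that job's share of the group's aggregate READ bandwidth reported in the run-status summary at the end. For example, for pid=98158 just above:

# bw is IOPS x block size: 286 IOPS x 4 KiB
echo "$((286 * 4)) KiB/s"   # -> 1144 KiB/s, matching the reported 1147KiB/s within rounding of the IOPS average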
01:20:32.101 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.101 filename0: (groupid=0, jobs=1): err= 0: pid=98159: Mon Jul 22 11:17:35 2024 01:20:32.101 read: IOPS=280, BW=1122KiB/s (1149kB/s)(11.0MiB/10052msec) 01:20:32.101 slat (usec): min=3, max=8020, avg=20.52, stdev=177.06 01:20:32.101 clat (usec): min=1938, max=110151, avg=56841.98, stdev=16163.05 01:20:32.101 lat (usec): min=1944, max=110161, avg=56862.50, stdev=16164.94 01:20:32.101 clat percentiles (msec): 01:20:32.101 | 1.00th=[ 3], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 47], 01:20:32.101 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 61], 01:20:32.101 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 72], 95.00th=[ 82], 01:20:32.101 | 99.00th=[ 91], 99.50th=[ 95], 99.90th=[ 97], 99.95th=[ 99], 01:20:32.101 | 99.99th=[ 111] 01:20:32.101 bw ( KiB/s): min= 928, max= 2048, per=4.12%, avg=1121.60, stdev=230.25, samples=20 01:20:32.101 iops : min= 232, max= 512, avg=280.40, stdev=57.56, samples=20 01:20:32.101 lat (msec) : 2=0.35%, 4=1.35%, 10=1.70%, 20=0.57%, 50=25.67% 01:20:32.101 lat (msec) : 100=70.32%, 250=0.04% 01:20:32.101 cpu : usr=34.74%, sys=2.54%, ctx=1143, majf=0, minf=0 01:20:32.101 IO depths : 1=0.2%, 2=1.6%, 4=5.6%, 8=76.5%, 16=16.0%, 32=0.0%, >=64=0.0% 01:20:32.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 complete : 0=0.0%, 4=89.3%, 8=9.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 issued rwts: total=2820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.101 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.101 filename0: (groupid=0, jobs=1): err= 0: pid=98160: Mon Jul 22 11:17:35 2024 01:20:32.101 read: IOPS=273, BW=1094KiB/s (1120kB/s)(10.7MiB/10029msec) 01:20:32.101 slat (usec): min=5, max=4035, avg=24.42, stdev=180.46 01:20:32.101 clat (msec): min=29, max=128, avg=58.33, stdev=13.19 01:20:32.101 lat (msec): min=29, max=128, avg=58.36, stdev=13.20 01:20:32.101 clat percentiles (msec): 01:20:32.101 | 1.00th=[ 34], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 47], 01:20:32.101 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 63], 01:20:32.101 | 70.00th=[ 65], 80.00th=[ 68], 90.00th=[ 73], 95.00th=[ 81], 01:20:32.101 | 99.00th=[ 93], 99.50th=[ 116], 99.90th=[ 120], 99.95th=[ 129], 01:20:32.101 | 99.99th=[ 129] 01:20:32.101 bw ( KiB/s): min= 880, max= 1192, per=4.01%, avg=1090.90, stdev=82.09, samples=20 01:20:32.101 iops : min= 220, max= 298, avg=272.70, stdev=20.54, samples=20 01:20:32.101 lat (msec) : 50=29.13%, 100=70.14%, 250=0.73% 01:20:32.101 cpu : usr=43.83%, sys=3.01%, ctx=1542, majf=0, minf=9 01:20:32.101 IO depths : 1=0.1%, 2=1.9%, 4=7.5%, 8=75.4%, 16=15.2%, 32=0.0%, >=64=0.0% 01:20:32.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 complete : 0=0.0%, 4=89.3%, 8=9.0%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 issued rwts: total=2743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.101 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.101 filename0: (groupid=0, jobs=1): err= 0: pid=98161: Mon Jul 22 11:17:35 2024 01:20:32.101 read: IOPS=279, BW=1117KiB/s (1144kB/s)(10.9MiB/10026msec) 01:20:32.101 slat (usec): min=5, max=8042, avg=27.48, stdev=302.75 01:20:32.101 clat (msec): min=23, max=108, avg=57.15, stdev=12.77 01:20:32.101 lat (msec): min=24, max=108, avg=57.17, stdev=12.77 01:20:32.101 clat percentiles (msec): 01:20:32.101 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 47], 01:20:32.101 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 
01:20:32.101 | 70.00th=[ 62], 80.00th=[ 69], 90.00th=[ 72], 95.00th=[ 79], 01:20:32.101 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 109], 99.95th=[ 109], 01:20:32.101 | 99.99th=[ 109] 01:20:32.101 bw ( KiB/s): min= 944, max= 1380, per=4.10%, avg=1114.20, stdev=106.21, samples=20 01:20:32.101 iops : min= 236, max= 345, avg=278.55, stdev=26.55, samples=20 01:20:32.101 lat (msec) : 50=33.07%, 100=66.79%, 250=0.14% 01:20:32.101 cpu : usr=31.53%, sys=1.97%, ctx=915, majf=0, minf=9 01:20:32.101 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=80.8%, 16=16.9%, 32=0.0%, >=64=0.0% 01:20:32.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 complete : 0=0.0%, 4=88.3%, 8=11.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 issued rwts: total=2800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.101 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.101 filename0: (groupid=0, jobs=1): err= 0: pid=98162: Mon Jul 22 11:17:35 2024 01:20:32.101 read: IOPS=290, BW=1163KiB/s (1191kB/s)(11.4MiB/10026msec) 01:20:32.101 slat (usec): min=3, max=8032, avg=26.62, stdev=276.17 01:20:32.101 clat (msec): min=22, max=104, avg=54.88, stdev=12.73 01:20:32.101 lat (msec): min=22, max=104, avg=54.91, stdev=12.72 01:20:32.101 clat percentiles (msec): 01:20:32.101 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 44], 01:20:32.101 | 30.00th=[ 48], 40.00th=[ 49], 50.00th=[ 56], 60.00th=[ 60], 01:20:32.101 | 70.00th=[ 62], 80.00th=[ 66], 90.00th=[ 71], 95.00th=[ 75], 01:20:32.101 | 99.00th=[ 89], 99.50th=[ 94], 99.90th=[ 105], 99.95th=[ 105], 01:20:32.101 | 99.99th=[ 105] 01:20:32.101 bw ( KiB/s): min= 1016, max= 1344, per=4.26%, avg=1160.00, stdev=80.88, samples=20 01:20:32.101 iops : min= 254, max= 336, avg=290.00, stdev=20.22, samples=20 01:20:32.101 lat (msec) : 50=42.94%, 100=56.93%, 250=0.14% 01:20:32.101 cpu : usr=36.25%, sys=2.38%, ctx=1149, majf=0, minf=9 01:20:32.101 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.4%, 16=16.4%, 32=0.0%, >=64=0.0% 01:20:32.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 issued rwts: total=2916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.101 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.101 filename0: (groupid=0, jobs=1): err= 0: pid=98163: Mon Jul 22 11:17:35 2024 01:20:32.101 read: IOPS=264, BW=1057KiB/s (1082kB/s)(10.4MiB/10052msec) 01:20:32.101 slat (usec): min=4, max=4021, avg=15.01, stdev=78.11 01:20:32.101 clat (msec): min=4, max=104, avg=60.38, stdev=16.33 01:20:32.101 lat (msec): min=4, max=104, avg=60.40, stdev=16.33 01:20:32.101 clat percentiles (msec): 01:20:32.101 | 1.00th=[ 6], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 01:20:32.101 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 65], 01:20:32.101 | 70.00th=[ 68], 80.00th=[ 70], 90.00th=[ 83], 95.00th=[ 87], 01:20:32.101 | 99.00th=[ 100], 99.50th=[ 104], 99.90th=[ 105], 99.95th=[ 105], 01:20:32.101 | 99.99th=[ 105] 01:20:32.101 bw ( KiB/s): min= 784, max= 1690, per=3.88%, avg=1055.30, stdev=208.82, samples=20 01:20:32.101 iops : min= 196, max= 422, avg=263.80, stdev=52.12, samples=20 01:20:32.101 lat (msec) : 10=1.81%, 20=0.08%, 50=22.18%, 100=75.34%, 250=0.60% 01:20:32.101 cpu : usr=41.17%, sys=2.59%, ctx=1500, majf=0, minf=9 01:20:32.101 IO depths : 1=0.2%, 2=3.3%, 4=13.0%, 8=69.1%, 16=14.5%, 32=0.0%, >=64=0.0% 01:20:32.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 complete : 
0=0.0%, 4=91.1%, 8=6.0%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 issued rwts: total=2656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.101 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.101 filename0: (groupid=0, jobs=1): err= 0: pid=98164: Mon Jul 22 11:17:35 2024 01:20:32.101 read: IOPS=292, BW=1171KiB/s (1199kB/s)(11.5MiB/10018msec) 01:20:32.101 slat (usec): min=2, max=8037, avg=30.40, stdev=330.71 01:20:32.101 clat (msec): min=24, max=102, avg=54.49, stdev=13.37 01:20:32.101 lat (msec): min=24, max=103, avg=54.52, stdev=13.37 01:20:32.101 clat percentiles (msec): 01:20:32.101 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 44], 01:20:32.101 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 57], 60.00th=[ 60], 01:20:32.101 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 72], 95.00th=[ 75], 01:20:32.101 | 99.00th=[ 90], 99.50th=[ 96], 99.90th=[ 104], 99.95th=[ 104], 01:20:32.101 | 99.99th=[ 104] 01:20:32.101 bw ( KiB/s): min= 1000, max= 1368, per=4.29%, avg=1167.58, stdev=98.00, samples=19 01:20:32.101 iops : min= 250, max= 342, avg=291.89, stdev=24.50, samples=19 01:20:32.101 lat (msec) : 50=43.16%, 100=56.73%, 250=0.10% 01:20:32.101 cpu : usr=31.26%, sys=1.93%, ctx=869, majf=0, minf=9 01:20:32.101 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.1%, 16=16.4%, 32=0.0%, >=64=0.0% 01:20:32.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.101 issued rwts: total=2933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.101 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.101 filename1: (groupid=0, jobs=1): err= 0: pid=98165: Mon Jul 22 11:17:35 2024 01:20:32.101 read: IOPS=284, BW=1136KiB/s (1163kB/s)(11.1MiB/10038msec) 01:20:32.101 slat (usec): min=6, max=8064, avg=25.66, stdev=225.91 01:20:32.102 clat (msec): min=24, max=101, avg=56.17, stdev=12.51 01:20:32.102 lat (msec): min=24, max=101, avg=56.19, stdev=12.51 01:20:32.102 clat percentiles (msec): 01:20:32.102 | 1.00th=[ 30], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 45], 01:20:32.102 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 61], 01:20:32.102 | 70.00th=[ 64], 80.00th=[ 67], 90.00th=[ 71], 95.00th=[ 77], 01:20:32.102 | 99.00th=[ 90], 99.50th=[ 95], 99.90th=[ 102], 99.95th=[ 102], 01:20:32.102 | 99.99th=[ 102] 01:20:32.102 bw ( KiB/s): min= 992, max= 1368, per=4.17%, avg=1133.45, stdev=95.81, samples=20 01:20:32.102 iops : min= 248, max= 342, avg=283.35, stdev=23.93, samples=20 01:20:32.102 lat (msec) : 50=34.51%, 100=65.38%, 250=0.11% 01:20:32.102 cpu : usr=39.51%, sys=2.48%, ctx=1376, majf=0, minf=9 01:20:32.102 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=80.7%, 16=16.5%, 32=0.0%, >=64=0.0% 01:20:32.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 complete : 0=0.0%, 4=88.2%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 issued rwts: total=2851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.102 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.102 filename1: (groupid=0, jobs=1): err= 0: pid=98166: Mon Jul 22 11:17:35 2024 01:20:32.102 read: IOPS=277, BW=1109KiB/s (1136kB/s)(10.9MiB/10037msec) 01:20:32.102 slat (usec): min=3, max=8038, avg=21.48, stdev=215.06 01:20:32.102 clat (msec): min=4, max=106, avg=57.51, stdev=14.89 01:20:32.102 lat (msec): min=4, max=106, avg=57.54, stdev=14.90 01:20:32.102 clat percentiles (msec): 01:20:32.102 | 1.00th=[ 5], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 48], 01:20:32.102 | 30.00th=[ 
51], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 61], 01:20:32.102 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 72], 95.00th=[ 83], 01:20:32.102 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 104], 99.95th=[ 104], 01:20:32.102 | 99.99th=[ 107] 01:20:32.102 bw ( KiB/s): min= 920, max= 1650, per=4.08%, avg=1108.50, stdev=157.55, samples=20 01:20:32.102 iops : min= 230, max= 412, avg=277.10, stdev=39.30, samples=20 01:20:32.102 lat (msec) : 10=2.30%, 50=26.73%, 100=70.68%, 250=0.29% 01:20:32.102 cpu : usr=31.42%, sys=1.86%, ctx=850, majf=0, minf=9 01:20:32.102 IO depths : 1=0.1%, 2=0.7%, 4=2.3%, 8=79.9%, 16=17.0%, 32=0.0%, >=64=0.0% 01:20:32.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 complete : 0=0.0%, 4=88.7%, 8=10.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 issued rwts: total=2783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.102 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.102 filename1: (groupid=0, jobs=1): err= 0: pid=98167: Mon Jul 22 11:17:35 2024 01:20:32.102 read: IOPS=297, BW=1192KiB/s (1220kB/s)(11.6MiB/10006msec) 01:20:32.102 slat (usec): min=2, max=8057, avg=28.24, stdev=255.57 01:20:32.102 clat (msec): min=6, max=151, avg=53.58, stdev=14.64 01:20:32.102 lat (msec): min=6, max=151, avg=53.61, stdev=14.64 01:20:32.102 clat percentiles (msec): 01:20:32.102 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 41], 01:20:32.102 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 58], 01:20:32.102 | 70.00th=[ 62], 80.00th=[ 65], 90.00th=[ 70], 95.00th=[ 75], 01:20:32.102 | 99.00th=[ 93], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 153], 01:20:32.102 | 99.99th=[ 153] 01:20:32.102 bw ( KiB/s): min= 904, max= 1368, per=4.35%, avg=1182.32, stdev=107.71, samples=19 01:20:32.102 iops : min= 226, max= 342, avg=295.58, stdev=26.93, samples=19 01:20:32.102 lat (msec) : 10=0.20%, 20=0.20%, 50=44.72%, 100=54.24%, 250=0.64% 01:20:32.102 cpu : usr=41.09%, sys=2.58%, ctx=1189, majf=0, minf=9 01:20:32.102 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.8%, 16=15.7%, 32=0.0%, >=64=0.0% 01:20:32.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 issued rwts: total=2981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.102 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.102 filename1: (groupid=0, jobs=1): err= 0: pid=98168: Mon Jul 22 11:17:35 2024 01:20:32.102 read: IOPS=295, BW=1183KiB/s (1212kB/s)(11.6MiB/10014msec) 01:20:32.102 slat (usec): min=2, max=8027, avg=28.58, stdev=272.61 01:20:32.102 clat (msec): min=22, max=105, avg=53.95, stdev=13.33 01:20:32.102 lat (msec): min=22, max=105, avg=53.98, stdev=13.33 01:20:32.102 clat percentiles (msec): 01:20:32.102 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 42], 01:20:32.102 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 59], 01:20:32.102 | 70.00th=[ 62], 80.00th=[ 65], 90.00th=[ 70], 95.00th=[ 75], 01:20:32.102 | 99.00th=[ 93], 99.50th=[ 94], 99.90th=[ 105], 99.95th=[ 106], 01:20:32.102 | 99.99th=[ 106] 01:20:32.102 bw ( KiB/s): min= 1000, max= 1360, per=4.33%, avg=1176.84, stdev=78.92, samples=19 01:20:32.102 iops : min= 250, max= 340, avg=294.21, stdev=19.73, samples=19 01:20:32.102 lat (msec) : 50=44.70%, 100=55.13%, 250=0.17% 01:20:32.102 cpu : usr=41.76%, sys=2.64%, ctx=1430, majf=0, minf=9 01:20:32.102 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.0%, 16=16.0%, 32=0.0%, >=64=0.0% 01:20:32.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 issued rwts: total=2962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.102 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.102 filename1: (groupid=0, jobs=1): err= 0: pid=98169: Mon Jul 22 11:17:35 2024 01:20:32.102 read: IOPS=281, BW=1126KiB/s (1153kB/s)(11.0MiB/10016msec) 01:20:32.102 slat (usec): min=3, max=8059, avg=39.05, stdev=409.15 01:20:32.102 clat (msec): min=23, max=104, avg=56.69, stdev=13.46 01:20:32.102 lat (msec): min=23, max=104, avg=56.72, stdev=13.47 01:20:32.102 clat percentiles (msec): 01:20:32.102 | 1.00th=[ 30], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 45], 01:20:32.102 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 61], 01:20:32.102 | 70.00th=[ 63], 80.00th=[ 68], 90.00th=[ 72], 95.00th=[ 80], 01:20:32.102 | 99.00th=[ 92], 99.50th=[ 97], 99.90th=[ 105], 99.95th=[ 106], 01:20:32.102 | 99.99th=[ 106] 01:20:32.102 bw ( KiB/s): min= 968, max= 1384, per=4.12%, avg=1120.00, stdev=115.93, samples=19 01:20:32.102 iops : min= 242, max= 346, avg=280.00, stdev=28.98, samples=19 01:20:32.102 lat (msec) : 50=33.17%, 100=66.69%, 250=0.14% 01:20:32.102 cpu : usr=34.91%, sys=2.35%, ctx=1149, majf=0, minf=9 01:20:32.102 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=81.7%, 16=16.7%, 32=0.0%, >=64=0.0% 01:20:32.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 issued rwts: total=2819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.102 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.102 filename1: (groupid=0, jobs=1): err= 0: pid=98170: Mon Jul 22 11:17:35 2024 01:20:32.102 read: IOPS=295, BW=1181KiB/s (1210kB/s)(11.5MiB/10010msec) 01:20:32.102 slat (usec): min=2, max=7004, avg=22.95, stdev=181.74 01:20:32.102 clat (msec): min=14, max=149, avg=54.07, stdev=14.85 01:20:32.102 lat (msec): min=14, max=149, avg=54.09, stdev=14.85 01:20:32.102 clat percentiles (msec): 01:20:32.102 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 41], 01:20:32.102 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 58], 01:20:32.102 | 70.00th=[ 62], 80.00th=[ 66], 90.00th=[ 70], 95.00th=[ 77], 01:20:32.102 | 99.00th=[ 96], 99.50th=[ 134], 99.90th=[ 134], 99.95th=[ 150], 01:20:32.102 | 99.99th=[ 150] 01:20:32.102 bw ( KiB/s): min= 992, max= 1432, per=4.30%, avg=1170.53, stdev=100.35, samples=19 01:20:32.102 iops : min= 248, max= 358, avg=292.63, stdev=25.09, samples=19 01:20:32.102 lat (msec) : 20=0.34%, 50=44.89%, 100=53.99%, 250=0.78% 01:20:32.102 cpu : usr=41.51%, sys=2.69%, ctx=1492, majf=0, minf=9 01:20:32.102 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.1%, 16=16.0%, 32=0.0%, >=64=0.0% 01:20:32.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 issued rwts: total=2956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.102 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.102 filename1: (groupid=0, jobs=1): err= 0: pid=98171: Mon Jul 22 11:17:35 2024 01:20:32.102 read: IOPS=287, BW=1149KiB/s (1177kB/s)(11.2MiB/10022msec) 01:20:32.102 slat (usec): min=3, max=8041, avg=31.00, stdev=333.66 01:20:32.102 clat (msec): min=22, max=131, avg=55.53, stdev=14.12 01:20:32.102 lat (msec): min=23, max=131, avg=55.56, stdev=14.12 01:20:32.102 clat percentiles (msec): 01:20:32.102 | 
1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 45], 01:20:32.102 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 58], 60.00th=[ 61], 01:20:32.102 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 72], 95.00th=[ 74], 01:20:32.102 | 99.00th=[ 96], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 132], 01:20:32.102 | 99.99th=[ 132] 01:20:32.102 bw ( KiB/s): min= 992, max= 1320, per=4.22%, avg=1147.35, stdev=97.51, samples=20 01:20:32.102 iops : min= 248, max= 330, avg=286.80, stdev=24.43, samples=20 01:20:32.102 lat (msec) : 50=40.53%, 100=58.63%, 250=0.83% 01:20:32.102 cpu : usr=31.11%, sys=2.07%, ctx=851, majf=0, minf=9 01:20:32.102 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.6%, 16=16.1%, 32=0.0%, >=64=0.0% 01:20:32.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 issued rwts: total=2879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.102 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.102 filename1: (groupid=0, jobs=1): err= 0: pid=98172: Mon Jul 22 11:17:35 2024 01:20:32.102 read: IOPS=282, BW=1130KiB/s (1157kB/s)(11.0MiB/10014msec) 01:20:32.102 slat (usec): min=2, max=8059, avg=31.44, stdev=337.53 01:20:32.102 clat (msec): min=22, max=122, avg=56.48, stdev=14.15 01:20:32.102 lat (msec): min=23, max=122, avg=56.51, stdev=14.15 01:20:32.102 clat percentiles (msec): 01:20:32.102 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 46], 01:20:32.102 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 61], 01:20:32.102 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 72], 95.00th=[ 80], 01:20:32.102 | 99.00th=[ 95], 99.50th=[ 110], 99.90th=[ 110], 99.95th=[ 123], 01:20:32.102 | 99.99th=[ 123] 01:20:32.102 bw ( KiB/s): min= 913, max= 1304, per=4.13%, avg=1122.16, stdev=98.80, samples=19 01:20:32.102 iops : min= 228, max= 326, avg=280.53, stdev=24.73, samples=19 01:20:32.102 lat (msec) : 50=38.76%, 100=60.54%, 250=0.71% 01:20:32.102 cpu : usr=31.33%, sys=1.86%, ctx=857, majf=0, minf=9 01:20:32.102 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.2%, 16=15.9%, 32=0.0%, >=64=0.0% 01:20:32.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 complete : 0=0.0%, 4=88.0%, 8=11.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.102 issued rwts: total=2828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.102 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.102 filename2: (groupid=0, jobs=1): err= 0: pid=98173: Mon Jul 22 11:17:35 2024 01:20:32.102 read: IOPS=289, BW=1160KiB/s (1187kB/s)(11.4MiB/10025msec) 01:20:32.102 slat (usec): min=4, max=8035, avg=25.41, stdev=226.76 01:20:32.102 clat (msec): min=26, max=101, avg=55.07, stdev=12.63 01:20:32.102 lat (msec): min=26, max=101, avg=55.09, stdev=12.63 01:20:32.102 clat percentiles (msec): 01:20:32.102 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 44], 01:20:32.103 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 55], 60.00th=[ 59], 01:20:32.103 | 70.00th=[ 63], 80.00th=[ 66], 90.00th=[ 71], 95.00th=[ 78], 01:20:32.103 | 99.00th=[ 88], 99.50th=[ 94], 99.90th=[ 102], 99.95th=[ 102], 01:20:32.103 | 99.99th=[ 102] 01:20:32.103 bw ( KiB/s): min= 1056, max= 1320, per=4.25%, avg=1156.10, stdev=73.11, samples=20 01:20:32.103 iops : min= 264, max= 330, avg=289.00, stdev=18.28, samples=20 01:20:32.103 lat (msec) : 50=38.82%, 100=61.08%, 250=0.10% 01:20:32.103 cpu : usr=41.83%, sys=2.63%, ctx=1406, majf=0, minf=9 01:20:32.103 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.8%, 16=16.0%, 32=0.0%, 
>=64=0.0% 01:20:32.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 issued rwts: total=2906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.103 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.103 filename2: (groupid=0, jobs=1): err= 0: pid=98174: Mon Jul 22 11:17:35 2024 01:20:32.103 read: IOPS=291, BW=1164KiB/s (1192kB/s)(11.4MiB/10018msec) 01:20:32.103 slat (usec): min=2, max=8048, avg=28.89, stdev=283.26 01:20:32.103 clat (msec): min=22, max=100, avg=54.83, stdev=13.26 01:20:32.103 lat (msec): min=22, max=100, avg=54.86, stdev=13.26 01:20:32.103 clat percentiles (msec): 01:20:32.103 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 43], 01:20:32.103 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 60], 01:20:32.103 | 70.00th=[ 62], 80.00th=[ 66], 90.00th=[ 71], 95.00th=[ 77], 01:20:32.103 | 99.00th=[ 91], 99.50th=[ 92], 99.90th=[ 101], 99.95th=[ 101], 01:20:32.103 | 99.99th=[ 101] 01:20:32.103 bw ( KiB/s): min= 1048, max= 1336, per=4.26%, avg=1159.58, stdev=81.52, samples=19 01:20:32.103 iops : min= 262, max= 334, avg=289.89, stdev=20.38, samples=19 01:20:32.103 lat (msec) : 50=39.57%, 100=60.32%, 250=0.10% 01:20:32.103 cpu : usr=37.34%, sys=2.58%, ctx=1138, majf=0, minf=9 01:20:32.103 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.7%, 16=16.3%, 32=0.0%, >=64=0.0% 01:20:32.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 issued rwts: total=2916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.103 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.103 filename2: (groupid=0, jobs=1): err= 0: pid=98175: Mon Jul 22 11:17:35 2024 01:20:32.103 read: IOPS=279, BW=1117KiB/s (1144kB/s)(10.9MiB/10018msec) 01:20:32.103 slat (usec): min=3, max=8038, avg=26.92, stdev=272.80 01:20:32.103 clat (msec): min=25, max=120, avg=57.13, stdev=13.39 01:20:32.103 lat (msec): min=25, max=121, avg=57.15, stdev=13.40 01:20:32.103 clat percentiles (msec): 01:20:32.103 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 01:20:32.103 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 61], 01:20:32.103 | 70.00th=[ 63], 80.00th=[ 69], 90.00th=[ 72], 95.00th=[ 75], 01:20:32.103 | 99.00th=[ 96], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 122], 01:20:32.103 | 99.99th=[ 122] 01:20:32.103 bw ( KiB/s): min= 896, max= 1280, per=4.10%, avg=1114.70, stdev=100.57, samples=20 01:20:32.103 iops : min= 224, max= 320, avg=278.65, stdev=25.14, samples=20 01:20:32.103 lat (msec) : 50=35.32%, 100=63.89%, 250=0.79% 01:20:32.103 cpu : usr=35.04%, sys=2.44%, ctx=985, majf=0, minf=9 01:20:32.103 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=76.5%, 16=15.5%, 32=0.0%, >=64=0.0% 01:20:32.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 complete : 0=0.0%, 4=89.1%, 8=9.5%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 issued rwts: total=2797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.103 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.103 filename2: (groupid=0, jobs=1): err= 0: pid=98176: Mon Jul 22 11:17:35 2024 01:20:32.103 read: IOPS=285, BW=1143KiB/s (1170kB/s)(11.2MiB/10034msec) 01:20:32.103 slat (usec): min=5, max=8456, avg=26.64, stdev=264.02 01:20:32.103 clat (msec): min=23, max=103, avg=55.85, stdev=12.95 01:20:32.103 lat (msec): min=23, max=103, avg=55.88, stdev=12.95 
01:20:32.103 clat percentiles (msec): 01:20:32.103 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 44], 01:20:32.103 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 58], 60.00th=[ 61], 01:20:32.103 | 70.00th=[ 63], 80.00th=[ 67], 90.00th=[ 72], 95.00th=[ 77], 01:20:32.103 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 103], 99.95th=[ 103], 01:20:32.103 | 99.99th=[ 104] 01:20:32.103 bw ( KiB/s): min= 936, max= 1384, per=4.19%, avg=1139.90, stdev=91.73, samples=20 01:20:32.103 iops : min= 234, max= 346, avg=284.95, stdev=22.93, samples=20 01:20:32.103 lat (msec) : 50=39.24%, 100=60.52%, 250=0.24% 01:20:32.103 cpu : usr=38.97%, sys=2.69%, ctx=1209, majf=0, minf=9 01:20:32.103 IO depths : 1=0.1%, 2=0.4%, 4=1.8%, 8=81.3%, 16=16.4%, 32=0.0%, >=64=0.0% 01:20:32.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 issued rwts: total=2867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.103 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.103 filename2: (groupid=0, jobs=1): err= 0: pid=98177: Mon Jul 22 11:17:35 2024 01:20:32.103 read: IOPS=285, BW=1140KiB/s (1168kB/s)(11.2MiB/10031msec) 01:20:32.103 slat (usec): min=2, max=8022, avg=23.35, stdev=213.75 01:20:32.103 clat (msec): min=24, max=100, avg=55.99, stdev=12.70 01:20:32.103 lat (msec): min=24, max=100, avg=56.02, stdev=12.70 01:20:32.103 clat percentiles (msec): 01:20:32.103 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 45], 01:20:32.103 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 60], 01:20:32.103 | 70.00th=[ 63], 80.00th=[ 67], 90.00th=[ 71], 95.00th=[ 77], 01:20:32.103 | 99.00th=[ 90], 99.50th=[ 96], 99.90th=[ 101], 99.95th=[ 101], 01:20:32.103 | 99.99th=[ 101] 01:20:32.103 bw ( KiB/s): min= 1000, max= 1296, per=4.18%, avg=1137.35, stdev=83.72, samples=20 01:20:32.103 iops : min= 250, max= 324, avg=284.30, stdev=20.92, samples=20 01:20:32.103 lat (msec) : 50=34.34%, 100=65.56%, 250=0.10% 01:20:32.103 cpu : usr=40.61%, sys=2.89%, ctx=1407, majf=0, minf=9 01:20:32.103 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.3%, 16=16.5%, 32=0.0%, >=64=0.0% 01:20:32.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 issued rwts: total=2860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.103 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.103 filename2: (groupid=0, jobs=1): err= 0: pid=98178: Mon Jul 22 11:17:35 2024 01:20:32.103 read: IOPS=276, BW=1107KiB/s (1134kB/s)(10.9MiB/10034msec) 01:20:32.103 slat (usec): min=5, max=8037, avg=30.05, stdev=339.66 01:20:32.103 clat (msec): min=22, max=106, avg=57.67, stdev=12.79 01:20:32.103 lat (msec): min=22, max=106, avg=57.70, stdev=12.80 01:20:32.103 clat percentiles (msec): 01:20:32.103 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 48], 01:20:32.103 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 61], 01:20:32.103 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 72], 95.00th=[ 82], 01:20:32.103 | 99.00th=[ 92], 99.50th=[ 94], 99.90th=[ 96], 99.95th=[ 105], 01:20:32.103 | 99.99th=[ 107] 01:20:32.103 bw ( KiB/s): min= 944, max= 1288, per=4.06%, avg=1104.25, stdev=93.69, samples=20 01:20:32.103 iops : min= 236, max= 322, avg=276.05, stdev=23.40, samples=20 01:20:32.103 lat (msec) : 50=32.15%, 100=67.78%, 250=0.07% 01:20:32.103 cpu : usr=31.41%, sys=1.84%, ctx=866, majf=0, minf=9 01:20:32.103 IO depths : 1=0.1%, 
2=0.3%, 4=1.1%, 8=81.4%, 16=17.2%, 32=0.0%, >=64=0.0% 01:20:32.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 complete : 0=0.0%, 4=88.3%, 8=11.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 issued rwts: total=2778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.103 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.103 filename2: (groupid=0, jobs=1): err= 0: pid=98179: Mon Jul 22 11:17:35 2024 01:20:32.103 read: IOPS=284, BW=1139KiB/s (1166kB/s)(11.2MiB/10037msec) 01:20:32.103 slat (usec): min=3, max=8022, avg=26.33, stdev=219.15 01:20:32.103 clat (msec): min=6, max=103, avg=56.02, stdev=13.70 01:20:32.103 lat (msec): min=6, max=103, avg=56.05, stdev=13.69 01:20:32.103 clat percentiles (msec): 01:20:32.103 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 45], 01:20:32.103 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 61], 01:20:32.103 | 70.00th=[ 64], 80.00th=[ 67], 90.00th=[ 71], 95.00th=[ 79], 01:20:32.103 | 99.00th=[ 92], 99.50th=[ 94], 99.90th=[ 105], 99.95th=[ 105], 01:20:32.103 | 99.99th=[ 105] 01:20:32.103 bw ( KiB/s): min= 944, max= 1408, per=4.19%, avg=1139.20, stdev=106.59, samples=20 01:20:32.103 iops : min= 236, max= 352, avg=284.80, stdev=26.65, samples=20 01:20:32.103 lat (msec) : 10=1.12%, 50=32.31%, 100=66.43%, 250=0.14% 01:20:32.103 cpu : usr=41.62%, sys=2.73%, ctx=1205, majf=0, minf=9 01:20:32.103 IO depths : 1=0.1%, 2=0.6%, 4=2.1%, 8=80.7%, 16=16.5%, 32=0.0%, >=64=0.0% 01:20:32.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 issued rwts: total=2857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.103 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.103 filename2: (groupid=0, jobs=1): err= 0: pid=98180: Mon Jul 22 11:17:35 2024 01:20:32.103 read: IOPS=276, BW=1106KiB/s (1132kB/s)(10.8MiB/10037msec) 01:20:32.103 slat (usec): min=5, max=8035, avg=25.33, stdev=263.14 01:20:32.103 clat (msec): min=23, max=107, avg=57.78, stdev=12.71 01:20:32.103 lat (msec): min=23, max=115, avg=57.81, stdev=12.72 01:20:32.103 clat percentiles (msec): 01:20:32.103 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 47], 01:20:32.103 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 61], 01:20:32.103 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 72], 95.00th=[ 82], 01:20:32.103 | 99.00th=[ 92], 99.50th=[ 94], 99.90th=[ 97], 99.95th=[ 99], 01:20:32.103 | 99.99th=[ 108] 01:20:32.103 bw ( KiB/s): min= 912, max= 1272, per=4.05%, avg=1102.65, stdev=95.91, samples=20 01:20:32.103 iops : min= 228, max= 318, avg=275.65, stdev=23.95, samples=20 01:20:32.103 lat (msec) : 50=31.80%, 100=68.17%, 250=0.04% 01:20:32.103 cpu : usr=34.37%, sys=2.32%, ctx=1004, majf=0, minf=9 01:20:32.103 IO depths : 1=0.1%, 2=1.0%, 4=4.3%, 8=78.3%, 16=16.3%, 32=0.0%, >=64=0.0% 01:20:32.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 complete : 0=0.0%, 4=88.9%, 8=10.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:32.103 issued rwts: total=2774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:32.103 latency : target=0, window=0, percentile=100.00%, depth=16 01:20:32.103 01:20:32.103 Run status group 0 (all jobs): 01:20:32.103 READ: bw=26.5MiB/s (27.8MB/s), 1057KiB/s-1192KiB/s (1082kB/s-1220kB/s), io=267MiB (280MB), run=10006-10052msec 01:20:32.103 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 01:20:32.103 11:17:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:20:32.103 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:20:32.103 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:20:32.103 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:20:32.103 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:20:32.103 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 01:20:32.104 11:17:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:32.104 bdev_null0 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:32.104 [2024-07-22 11:17:35.465605] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:32.104 bdev_null1 01:20:32.104 11:17:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:20:32.104 { 01:20:32.104 "params": { 01:20:32.104 "name": "Nvme$subsystem", 01:20:32.104 "trtype": "$TEST_TRANSPORT", 01:20:32.104 "traddr": "$NVMF_FIRST_TARGET_IP", 01:20:32.104 "adrfam": "ipv4", 01:20:32.104 "trsvcid": "$NVMF_PORT", 01:20:32.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:20:32.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:20:32.104 "hdgst": ${hdgst:-false}, 01:20:32.104 "ddgst": ${ddgst:-false} 01:20:32.104 }, 01:20:32.104 "method": "bdev_nvme_attach_controller" 01:20:32.104 } 01:20:32.104 EOF 01:20:32.104 )") 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:20:32.104 11:17:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:20:32.104 { 01:20:32.104 "params": { 01:20:32.104 "name": "Nvme$subsystem", 01:20:32.104 "trtype": "$TEST_TRANSPORT", 01:20:32.104 "traddr": "$NVMF_FIRST_TARGET_IP", 01:20:32.104 "adrfam": "ipv4", 01:20:32.104 "trsvcid": "$NVMF_PORT", 01:20:32.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:20:32.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:20:32.104 "hdgst": ${hdgst:-false}, 01:20:32.104 "ddgst": ${ddgst:-false} 01:20:32.104 }, 01:20:32.104 "method": "bdev_nvme_attach_controller" 01:20:32.104 } 01:20:32.104 EOF 01:20:32.104 )") 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 01:20:32.104 11:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:20:32.104 "params": { 01:20:32.104 "name": "Nvme0", 01:20:32.104 "trtype": "tcp", 01:20:32.104 "traddr": "10.0.0.2", 01:20:32.104 "adrfam": "ipv4", 01:20:32.104 "trsvcid": "4420", 01:20:32.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:20:32.104 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:20:32.104 "hdgst": false, 01:20:32.104 "ddgst": false 01:20:32.105 }, 01:20:32.105 "method": "bdev_nvme_attach_controller" 01:20:32.105 },{ 01:20:32.105 "params": { 01:20:32.105 "name": "Nvme1", 01:20:32.105 "trtype": "tcp", 01:20:32.105 "traddr": "10.0.0.2", 01:20:32.105 "adrfam": "ipv4", 01:20:32.105 "trsvcid": "4420", 01:20:32.105 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:20:32.105 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:20:32.105 "hdgst": false, 01:20:32.105 "ddgst": false 01:20:32.105 }, 01:20:32.105 "method": "bdev_nvme_attach_controller" 01:20:32.105 }' 01:20:32.105 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:20:32.105 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:20:32.105 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:20:32.105 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:32.105 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:20:32.105 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:20:32.105 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:20:32.105 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:20:32.105 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:20:32.105 11:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:32.105 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:20:32.105 ... 01:20:32.105 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:20:32.105 ... 
01:20:32.105 fio-3.35 01:20:32.105 Starting 4 threads 01:20:36.293 01:20:36.293 filename0: (groupid=0, jobs=1): err= 0: pid=98320: Mon Jul 22 11:17:41 2024 01:20:36.293 read: IOPS=2375, BW=18.6MiB/s (19.5MB/s)(92.8MiB/5002msec) 01:20:36.293 slat (nsec): min=5877, max=82066, avg=19996.75, stdev=11397.99 01:20:36.293 clat (usec): min=769, max=14490, avg=3296.51, stdev=720.15 01:20:36.293 lat (usec): min=783, max=14504, avg=3316.50, stdev=721.96 01:20:36.293 clat percentiles (usec): 01:20:36.293 | 1.00th=[ 1631], 5.00th=[ 2147], 10.00th=[ 2376], 20.00th=[ 2737], 01:20:36.293 | 30.00th=[ 2999], 40.00th=[ 3163], 50.00th=[ 3261], 60.00th=[ 3458], 01:20:36.293 | 70.00th=[ 3654], 80.00th=[ 3884], 90.00th=[ 4146], 95.00th=[ 4293], 01:20:36.293 | 99.00th=[ 4686], 99.50th=[ 5145], 99.90th=[ 7504], 99.95th=[10028], 01:20:36.293 | 99.99th=[13829] 01:20:36.293 bw ( KiB/s): min=17456, max=21968, per=23.72%, avg=19329.78, stdev=1378.45, samples=9 01:20:36.293 iops : min= 2182, max= 2746, avg=2416.22, stdev=172.31, samples=9 01:20:36.293 lat (usec) : 1000=0.02% 01:20:36.293 lat (msec) : 2=3.22%, 4=81.10%, 10=15.60%, 20=0.07% 01:20:36.293 cpu : usr=94.04%, sys=5.22%, ctx=7, majf=0, minf=9 01:20:36.293 IO depths : 1=1.6%, 2=13.3%, 4=58.5%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:36.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:36.293 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:36.293 issued rwts: total=11881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:36.293 latency : target=0, window=0, percentile=100.00%, depth=8 01:20:36.293 filename0: (groupid=0, jobs=1): err= 0: pid=98321: Mon Jul 22 11:17:41 2024 01:20:36.293 read: IOPS=2744, BW=21.4MiB/s (22.5MB/s)(107MiB/5003msec) 01:20:36.293 slat (usec): min=5, max=136, avg=13.54, stdev= 9.36 01:20:36.293 clat (usec): min=597, max=14370, avg=2876.34, stdev=796.10 01:20:36.293 lat (usec): min=610, max=14384, avg=2889.87, stdev=796.90 01:20:36.293 clat percentiles (usec): 01:20:36.293 | 1.00th=[ 1090], 5.00th=[ 1582], 10.00th=[ 1713], 20.00th=[ 2212], 01:20:36.293 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 3130], 01:20:36.293 | 70.00th=[ 3261], 80.00th=[ 3392], 90.00th=[ 3785], 95.00th=[ 4047], 01:20:36.293 | 99.00th=[ 4424], 99.50th=[ 4817], 99.90th=[ 6980], 99.95th=[10028], 01:20:36.293 | 99.99th=[13829] 01:20:36.293 bw ( KiB/s): min=19094, max=23872, per=26.46%, avg=21561.56, stdev=1720.54, samples=9 01:20:36.293 iops : min= 2386, max= 2984, avg=2695.11, stdev=215.20, samples=9 01:20:36.293 lat (usec) : 750=0.07%, 1000=0.60% 01:20:36.293 lat (msec) : 2=15.85%, 4=78.00%, 10=5.43%, 20=0.06% 01:20:36.293 cpu : usr=92.78%, sys=6.30%, ctx=6, majf=0, minf=0 01:20:36.293 IO depths : 1=0.2%, 2=5.9%, 4=62.4%, 8=31.6%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:36.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:36.293 complete : 0=0.0%, 4=97.7%, 8=2.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:36.293 issued rwts: total=13730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:36.293 latency : target=0, window=0, percentile=100.00%, depth=8 01:20:36.293 filename1: (groupid=0, jobs=1): err= 0: pid=98322: Mon Jul 22 11:17:41 2024 01:20:36.293 read: IOPS=2365, BW=18.5MiB/s (19.4MB/s)(92.5MiB/5004msec) 01:20:36.293 slat (usec): min=6, max=190, avg=19.69, stdev=12.05 01:20:36.293 clat (usec): min=613, max=13654, avg=3310.11, stdev=720.95 01:20:36.293 lat (usec): min=621, max=13662, avg=3329.80, stdev=723.33 01:20:36.293 clat percentiles (usec): 01:20:36.293 | 1.00th=[ 
1516], 5.00th=[ 2114], 10.00th=[ 2409], 20.00th=[ 2737], 01:20:36.293 | 30.00th=[ 2999], 40.00th=[ 3195], 50.00th=[ 3261], 60.00th=[ 3458], 01:20:36.293 | 70.00th=[ 3687], 80.00th=[ 3916], 90.00th=[ 4146], 95.00th=[ 4293], 01:20:36.293 | 99.00th=[ 4948], 99.50th=[ 5473], 99.90th=[ 7111], 99.95th=[ 9896], 01:20:36.293 | 99.99th=[11600] 01:20:36.293 bw ( KiB/s): min=17424, max=20864, per=23.59%, avg=19224.89, stdev=1234.16, samples=9 01:20:36.293 iops : min= 2178, max= 2608, avg=2403.11, stdev=154.27, samples=9 01:20:36.293 lat (usec) : 750=0.12%, 1000=0.06% 01:20:36.293 lat (msec) : 2=3.17%, 4=80.33%, 10=16.30%, 20=0.03% 01:20:36.293 cpu : usr=93.60%, sys=5.34%, ctx=73, majf=0, minf=0 01:20:36.293 IO depths : 1=1.6%, 2=13.6%, 4=58.4%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:36.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:36.293 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:36.293 issued rwts: total=11838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:36.293 latency : target=0, window=0, percentile=100.00%, depth=8 01:20:36.293 filename1: (groupid=0, jobs=1): err= 0: pid=98323: Mon Jul 22 11:17:41 2024 01:20:36.293 read: IOPS=2704, BW=21.1MiB/s (22.2MB/s)(106MiB/5001msec) 01:20:36.293 slat (nsec): min=5801, max=72399, avg=13924.86, stdev=7840.09 01:20:36.293 clat (usec): min=440, max=14523, avg=2918.85, stdev=1017.41 01:20:36.293 lat (usec): min=453, max=14537, avg=2932.77, stdev=1018.61 01:20:36.293 clat percentiles (usec): 01:20:36.293 | 1.00th=[ 1029], 5.00th=[ 1532], 10.00th=[ 1614], 20.00th=[ 1795], 01:20:36.293 | 30.00th=[ 2606], 40.00th=[ 2769], 50.00th=[ 2933], 60.00th=[ 3097], 01:20:36.293 | 70.00th=[ 3261], 80.00th=[ 3589], 90.00th=[ 4228], 95.00th=[ 4883], 01:20:36.293 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 7046], 99.95th=[10028], 01:20:36.293 | 99.99th=[13829] 01:20:36.293 bw ( KiB/s): min=14096, max=25584, per=26.01%, avg=21196.44, stdev=4278.66, samples=9 01:20:36.293 iops : min= 1762, max= 3198, avg=2649.56, stdev=534.83, samples=9 01:20:36.293 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.68% 01:20:36.293 lat (msec) : 2=22.40%, 4=63.30%, 10=13.52%, 20=0.05% 01:20:36.293 cpu : usr=91.54%, sys=7.64%, ctx=7, majf=0, minf=9 01:20:36.293 IO depths : 1=0.2%, 2=5.5%, 4=61.8%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:36.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:36.293 complete : 0=0.0%, 4=97.9%, 8=2.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:36.293 issued rwts: total=13525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:36.293 latency : target=0, window=0, percentile=100.00%, depth=8 01:20:36.293 01:20:36.293 Run status group 0 (all jobs): 01:20:36.293 READ: bw=79.6MiB/s (83.4MB/s), 18.5MiB/s-21.4MiB/s (19.4MB/s-22.5MB/s), io=398MiB (418MB), run=5001-5004msec 01:20:36.293 11:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 01:20:36.293 11:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:20:36.293 11:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:20:36.293 11:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:20:36.293 11:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:20:36.293 11:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:20:36.293 11:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:36.293 11:17:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:36.293 11:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:36.293 11:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:20:36.293 11:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:36.293 11:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:36.552 11:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:36.552 11:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:20:36.552 11:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:20:36.552 11:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:20:36.552 11:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:20:36.552 11:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:36.552 11:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:36.552 11:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:36.552 11:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:20:36.552 11:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:36.552 11:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:36.552 ************************************ 01:20:36.552 END TEST fio_dif_rand_params 01:20:36.552 ************************************ 01:20:36.552 11:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:36.552 01:20:36.552 real 0m23.386s 01:20:36.552 user 2m2.495s 01:20:36.552 sys 0m9.070s 01:20:36.552 11:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 01:20:36.552 11:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:20:36.552 11:17:41 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 01:20:36.552 11:17:41 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 01:20:36.552 11:17:41 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:20:36.552 11:17:41 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 01:20:36.552 11:17:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:20:36.552 ************************************ 01:20:36.552 START TEST fio_dif_digest 01:20:36.552 ************************************ 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 01:20:36.552 11:17:41 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:20:36.552 bdev_null0 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:20:36.552 [2024-07-22 11:17:41.646098] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 01:20:36.552 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:20:36.553 { 01:20:36.553 "params": { 01:20:36.553 "name": 
"Nvme$subsystem", 01:20:36.553 "trtype": "$TEST_TRANSPORT", 01:20:36.553 "traddr": "$NVMF_FIRST_TARGET_IP", 01:20:36.553 "adrfam": "ipv4", 01:20:36.553 "trsvcid": "$NVMF_PORT", 01:20:36.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:20:36.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:20:36.553 "hdgst": ${hdgst:-false}, 01:20:36.553 "ddgst": ${ddgst:-false} 01:20:36.553 }, 01:20:36.553 "method": "bdev_nvme_attach_controller" 01:20:36.553 } 01:20:36.553 EOF 01:20:36.553 )") 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:20:36.553 "params": { 01:20:36.553 "name": "Nvme0", 01:20:36.553 "trtype": "tcp", 01:20:36.553 "traddr": "10.0.0.2", 01:20:36.553 "adrfam": "ipv4", 01:20:36.553 "trsvcid": "4420", 01:20:36.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:20:36.553 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:20:36.553 "hdgst": true, 01:20:36.553 "ddgst": true 01:20:36.553 }, 01:20:36.553 "method": "bdev_nvme_attach_controller" 01:20:36.553 }' 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:20:36.553 11:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:20:36.811 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:20:36.811 ... 
01:20:36.811 fio-3.35 01:20:36.811 Starting 3 threads 01:20:49.013 01:20:49.013 filename0: (groupid=0, jobs=1): err= 0: pid=98429: Mon Jul 22 11:17:52 2024 01:20:49.013 read: IOPS=287, BW=35.9MiB/s (37.7MB/s)(359MiB/10004msec) 01:20:49.013 slat (nsec): min=6025, max=39433, avg=15895.97, stdev=5732.21 01:20:49.013 clat (usec): min=6590, max=11763, avg=10405.42, stdev=279.81 01:20:49.013 lat (usec): min=6598, max=11778, avg=10421.31, stdev=281.29 01:20:49.013 clat percentiles (usec): 01:20:49.013 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10159], 01:20:49.013 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10290], 60.00th=[10421], 01:20:49.013 | 70.00th=[10552], 80.00th=[10552], 90.00th=[10683], 95.00th=[10945], 01:20:49.013 | 99.00th=[11207], 99.50th=[11338], 99.90th=[11731], 99.95th=[11731], 01:20:49.013 | 99.99th=[11731] 01:20:49.013 bw ( KiB/s): min=35328, max=37632, per=33.33%, avg=36742.74, stdev=735.89, samples=19 01:20:49.013 iops : min= 276, max= 294, avg=287.05, stdev= 5.75, samples=19 01:20:49.013 lat (msec) : 10=0.21%, 20=99.79% 01:20:49.013 cpu : usr=89.56%, sys=10.00%, ctx=13, majf=0, minf=0 01:20:49.013 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:49.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:49.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:49.013 issued rwts: total=2874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:49.013 latency : target=0, window=0, percentile=100.00%, depth=3 01:20:49.013 filename0: (groupid=0, jobs=1): err= 0: pid=98430: Mon Jul 22 11:17:52 2024 01:20:49.013 read: IOPS=286, BW=35.9MiB/s (37.6MB/s)(359MiB/10005msec) 01:20:49.013 slat (nsec): min=6120, max=45026, avg=16490.85, stdev=5075.16 01:20:49.013 clat (usec): min=10064, max=14451, avg=10414.97, stdev=275.54 01:20:49.013 lat (usec): min=10078, max=14482, avg=10431.47, stdev=277.04 01:20:49.013 clat percentiles (usec): 01:20:49.013 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10159], 01:20:49.013 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10290], 60.00th=[10421], 01:20:49.013 | 70.00th=[10552], 80.00th=[10552], 90.00th=[10683], 95.00th=[10945], 01:20:49.013 | 99.00th=[11207], 99.50th=[11338], 99.90th=[14484], 99.95th=[14484], 01:20:49.013 | 99.99th=[14484] 01:20:49.013 bw ( KiB/s): min=35328, max=37632, per=33.30%, avg=36702.32, stdev=792.32, samples=19 01:20:49.013 iops : min= 276, max= 294, avg=286.74, stdev= 6.19, samples=19 01:20:49.013 lat (msec) : 20=100.00% 01:20:49.013 cpu : usr=89.63%, sys=9.92%, ctx=111, majf=0, minf=9 01:20:49.013 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:49.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:49.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:49.013 issued rwts: total=2871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:49.013 latency : target=0, window=0, percentile=100.00%, depth=3 01:20:49.013 filename0: (groupid=0, jobs=1): err= 0: pid=98431: Mon Jul 22 11:17:52 2024 01:20:49.013 read: IOPS=286, BW=35.9MiB/s (37.6MB/s)(359MiB/10004msec) 01:20:49.013 slat (nsec): min=6026, max=41650, avg=16602.99, stdev=5181.56 01:20:49.013 clat (usec): min=10113, max=13923, avg=10414.63, stdev=267.24 01:20:49.013 lat (usec): min=10127, max=13951, avg=10431.24, stdev=268.47 01:20:49.013 clat percentiles (usec): 01:20:49.013 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10159], 01:20:49.013 | 30.00th=[10290], 40.00th=[10290], 
50.00th=[10290], 60.00th=[10421], 01:20:49.013 | 70.00th=[10552], 80.00th=[10552], 90.00th=[10683], 95.00th=[10945], 01:20:49.013 | 99.00th=[11207], 99.50th=[11338], 99.90th=[13960], 99.95th=[13960], 01:20:49.013 | 99.99th=[13960] 01:20:49.013 bw ( KiB/s): min=35328, max=37632, per=33.30%, avg=36706.00, stdev=785.71, samples=19 01:20:49.013 iops : min= 276, max= 294, avg=286.74, stdev= 6.19, samples=19 01:20:49.013 lat (msec) : 20=100.00% 01:20:49.013 cpu : usr=90.12%, sys=9.46%, ctx=17, majf=0, minf=9 01:20:49.013 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:49.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:49.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:49.013 issued rwts: total=2871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:49.013 latency : target=0, window=0, percentile=100.00%, depth=3 01:20:49.013 01:20:49.013 Run status group 0 (all jobs): 01:20:49.013 READ: bw=108MiB/s (113MB/s), 35.9MiB/s-35.9MiB/s (37.6MB/s-37.7MB/s), io=1077MiB (1129MB), run=10004-10005msec 01:20:49.013 11:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 01:20:49.013 11:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 01:20:49.013 11:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 01:20:49.014 11:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 01:20:49.014 11:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 01:20:49.014 11:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:20:49.014 11:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:49.014 11:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:20:49.014 11:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:49.014 11:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:20:49.014 11:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:49.014 11:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:20:49.014 ************************************ 01:20:49.014 END TEST fio_dif_digest 01:20:49.014 ************************************ 01:20:49.014 11:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:49.014 01:20:49.014 real 0m11.011s 01:20:49.014 user 0m27.552s 01:20:49.014 sys 0m3.300s 01:20:49.014 11:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 01:20:49.014 11:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:20:49.014 11:17:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 01:20:49.014 11:17:52 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 01:20:49.014 11:17:52 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 01:20:49.014 11:17:52 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 01:20:49.014 11:17:52 nvmf_dif -- nvmf/common.sh@117 -- # sync 01:20:49.014 11:17:52 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:20:49.014 11:17:52 nvmf_dif -- nvmf/common.sh@120 -- # set +e 01:20:49.014 11:17:52 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 01:20:49.014 11:17:52 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:20:49.014 rmmod nvme_tcp 01:20:49.014 rmmod nvme_fabrics 01:20:49.014 rmmod nvme_keyring 01:20:49.014 11:17:52 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:20:49.014 11:17:52 nvmf_dif -- nvmf/common.sh@124 -- # set -e 01:20:49.014 11:17:52 nvmf_dif -- nvmf/common.sh@125 -- # return 0 01:20:49.014 11:17:52 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97672 ']' 01:20:49.014 11:17:52 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97672 01:20:49.014 11:17:52 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 97672 ']' 01:20:49.014 11:17:52 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 97672 01:20:49.014 11:17:52 nvmf_dif -- common/autotest_common.sh@953 -- # uname 01:20:49.014 11:17:52 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:20:49.014 11:17:52 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97672 01:20:49.014 killing process with pid 97672 01:20:49.014 11:17:52 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:20:49.014 11:17:52 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:20:49.014 11:17:52 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97672' 01:20:49.014 11:17:52 nvmf_dif -- common/autotest_common.sh@967 -- # kill 97672 01:20:49.014 11:17:52 nvmf_dif -- common/autotest_common.sh@972 -- # wait 97672 01:20:49.014 11:17:53 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 01:20:49.014 11:17:53 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:20:49.014 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:20:49.014 Waiting for block devices as requested 01:20:49.014 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:20:49.014 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:20:49.014 11:17:53 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:20:49.014 11:17:53 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:20:49.014 11:17:53 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:20:49.014 11:17:53 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 01:20:49.014 11:17:53 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:49.014 11:17:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:20:49.014 11:17:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:49.014 11:17:53 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:20:49.014 ************************************ 01:20:49.014 END TEST nvmf_dif 01:20:49.014 ************************************ 01:20:49.014 01:20:49.014 real 1m0.262s 01:20:49.014 user 3m45.735s 01:20:49.014 sys 0m22.561s 01:20:49.014 11:17:53 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 01:20:49.014 11:17:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:20:49.014 11:17:53 -- common/autotest_common.sh@1142 -- # return 0 01:20:49.014 11:17:53 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:20:49.014 11:17:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:20:49.014 11:17:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:20:49.014 11:17:53 -- common/autotest_common.sh@10 -- # set +x 01:20:49.014 ************************************ 01:20:49.014 START TEST nvmf_abort_qd_sizes 01:20:49.014 ************************************ 01:20:49.014 11:17:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:20:49.014 * Looking for test storage... 01:20:49.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:20:49.014 11:17:54 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:20:49.014 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:20:49.015 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:20:49.015 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:20:49.015 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:20:49.015 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:20:49.015 Cannot find device "nvmf_tgt_br" 01:20:49.015 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 01:20:49.015 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:20:49.015 Cannot find device "nvmf_tgt_br2" 01:20:49.015 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 01:20:49.015 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:20:49.015 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:20:49.015 Cannot find device "nvmf_tgt_br" 01:20:49.015 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 01:20:49.015 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:20:49.015 Cannot find device "nvmf_tgt_br2" 01:20:49.015 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 01:20:49.015 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:20:49.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:20:49.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:20:49.273 11:17:54 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:20:49.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:20:49.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 01:20:49.273 01:20:49.273 --- 10.0.0.2 ping statistics --- 01:20:49.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:49.273 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:20:49.273 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:20:49.273 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 01:20:49.273 01:20:49.273 --- 10.0.0.3 ping statistics --- 01:20:49.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:49.273 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:20:49.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:20:49.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 01:20:49.273 01:20:49.273 --- 10.0.0.1 ping statistics --- 01:20:49.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:49.273 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 01:20:49.273 11:17:54 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:20:50.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:20:50.207 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:20:50.466 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=99027 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 99027 01:20:50.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 99027 ']' 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 01:20:50.466 11:17:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:20:50.466 [2024-07-22 11:17:55.593274] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
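The nvmf_veth_init trace earlier in this block is what gives the just-launched nvmf_tgt its network: a dedicated namespace, veth pairs whose host-side ends hang off one bridge, and an iptables rule opening NVMe/TCP's port 4420. A condensed sketch of the same topology (interface and namespace names as in the trace, second target interface omitted, run as root):

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per side: *_if is the addressed end, *_br stays on the host bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side ends together and let NVMe/TCP traffic through
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # same reachability check the trace runs before starting the target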
01:20:50.466 [2024-07-22 11:17:55.593444] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:20:50.724 [2024-07-22 11:17:55.751189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:20:50.724 [2024-07-22 11:17:55.801305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:20:50.724 [2024-07-22 11:17:55.801539] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:20:50.724 [2024-07-22 11:17:55.801628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:20:50.724 [2024-07-22 11:17:55.801674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:20:50.724 [2024-07-22 11:17:55.801699] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:20:50.724 [2024-07-22 11:17:55.802703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:20:50.724 [2024-07-22 11:17:55.802800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:20:50.724 [2024-07-22 11:17:55.802863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:20:50.725 [2024-07-22 11:17:55.802869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:20:50.725 [2024-07-22 11:17:55.844323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:20:51.290 11:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:20:51.290 11:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 01:20:51.290 11:17:56 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:20:51.290 11:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 01:20:51.290 11:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 01:20:51.549 11:17:56 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 01:20:51.549 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 01:20:51.550 11:17:56 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:20:51.550 11:17:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
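The nvme_in_userspace walk above finds NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVM Express). Pulled out of the trace, the whole discovery reduces to one pipeline; note the awk variable deliberately carries the double quotes so it matches lspci's quoted class field:

  # prints the BDFs of every NVMe controller, e.g. 0000:00:10.0 and 0000:00:11.0 here
  lspci -mm -n -D | grep -i -- -p02 |
    awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'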
01:20:51.550 11:17:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 01:20:51.550 11:17:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 01:20:51.550 11:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:20:51.550 11:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 01:20:51.550 11:17:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:20:51.550 ************************************ 01:20:51.550 START TEST spdk_target_abort 01:20:51.550 ************************************ 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:20:51.550 spdk_targetn1 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:20:51.550 [2024-07-22 11:17:56.674097] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:20:51.550 [2024-07-22 11:17:56.714182] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:20:51.550 11:17:56 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:20:51.550 11:17:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:20:54.827 Initializing NVMe Controllers 01:20:54.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:20:54.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:20:54.827 Initialization complete. Launching workers. 
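rpc_cmd in the trace above is the harness wrapper around scripts/rpc.py. Driven by hand against an already-running nvmf_tgt (a sketch, assuming the default /var/tmp/spdk.sock RPC socket), the same spdk_target_abort setup is:

  # claim the PCIe drive as an SPDK bdev, export it over NVMe/TCP, then fire the abort workload
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # 4 KiB mixed read/write at queue depth 4; the example submits aborts against in-flight I/O
  build/examples/abort -q 4 -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'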
01:20:54.827 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15475, failed: 0 01:20:54.827 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1079, failed to submit 14396 01:20:54.827 success 766, unsuccess 313, failed 0 01:20:54.827 11:17:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:20:54.827 11:17:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:20:58.111 Initializing NVMe Controllers 01:20:58.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:20:58.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:20:58.111 Initialization complete. Launching workers. 01:20:58.111 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8976, failed: 0 01:20:58.111 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1197, failed to submit 7779 01:20:58.111 success 373, unsuccess 824, failed 0 01:20:58.111 11:18:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:20:58.111 11:18:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:21:01.397 Initializing NVMe Controllers 01:21:01.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:21:01.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:21:01.397 Initialization complete. Launching workers. 
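The counters from the first pass above (queue depth 4) reconcile: the abort attempts (submitted plus failed-to-submit) account for every completed I/O, and the submitted aborts split into success and unsuccess:

  1079 (aborts submitted) + 14396 (failed to submit) = 15475 = I/O completed
   766 (success)          +   313 (unsuccess)        =  1079 = aborts submitted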
01:21:01.397 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34789, failed: 0 01:21:01.397 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2473, failed to submit 32316 01:21:01.397 success 509, unsuccess 1964, failed 0 01:21:01.397 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 01:21:01.397 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:01.397 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:21:01.397 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:01.397 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 01:21:01.397 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:01.397 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:21:01.655 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:01.656 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99027 01:21:01.656 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 99027 ']' 01:21:01.656 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 99027 01:21:01.656 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 01:21:01.656 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:01.656 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99027 01:21:01.914 killing process with pid 99027 01:21:01.914 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:21:01.914 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:21:01.914 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99027' 01:21:01.914 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 99027 01:21:01.914 11:18:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 99027 01:21:01.914 ************************************ 01:21:01.914 END TEST spdk_target_abort 01:21:01.914 ************************************ 01:21:01.914 01:21:01.914 real 0m10.467s 01:21:01.914 user 0m41.551s 01:21:01.914 sys 0m2.677s 01:21:01.914 11:18:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 01:21:01.914 11:18:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:21:02.173 11:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 01:21:02.173 11:18:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 01:21:02.173 11:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:21:02.173 11:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 01:21:02.173 11:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:21:02.173 
************************************ 01:21:02.173 START TEST kernel_target_abort 01:21:02.173 ************************************ 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 01:21:02.173 11:18:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:21:02.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:21:02.776 Waiting for block devices as requested 01:21:02.776 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:21:02.776 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:21:03.034 No valid GPT data, bailing 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:21:03.034 No valid GPT data, bailing 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
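block_in_use above guards against clobbering a disk that already holds data; alongside the spdk-gpt.py probe it simply asks blkid whether the device carries a partition-table signature. A minimal standalone version of that check (device name hypothetical):

  dev=/dev/nvme0n1                          # hypothetical device to probe
  pt=$(blkid -s PTTYPE -o value "$dev")     # prints nothing when no partition table is found
  if [ -n "$pt" ]; then
    echo "$dev carries a $pt partition table, leaving it alone"
  else
    echo "$dev looks unused, safe for the test"
  fi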
01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:21:03.034 No valid GPT data, bailing 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:21:03.034 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:21:03.293 No valid GPT data, bailing 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb --hostid=7758934d-ca6b-403e-9e5d-3518ecb16acb -a 10.0.0.1 -t tcp -s 4420 01:21:03.293 01:21:03.293 Discovery Log Number of Records 2, Generation counter 2 01:21:03.293 =====Discovery Log Entry 0====== 01:21:03.293 trtype: tcp 01:21:03.293 adrfam: ipv4 01:21:03.293 subtype: current discovery subsystem 01:21:03.293 treq: not specified, sq flow control disable supported 01:21:03.293 portid: 1 01:21:03.293 trsvcid: 4420 01:21:03.293 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:21:03.293 traddr: 10.0.0.1 01:21:03.293 eflags: none 01:21:03.293 sectype: none 01:21:03.293 =====Discovery Log Entry 1====== 01:21:03.293 trtype: tcp 01:21:03.293 adrfam: ipv4 01:21:03.293 subtype: nvme subsystem 01:21:03.293 treq: not specified, sq flow control disable supported 01:21:03.293 portid: 1 01:21:03.293 trsvcid: 4420 01:21:03.293 subnqn: nqn.2016-06.io.spdk:testnqn 01:21:03.293 traddr: 10.0.0.1 01:21:03.293 eflags: none 01:21:03.293 sectype: none 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:21:03.293 11:18:08 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:21:03.293 11:18:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:21:06.607 Initializing NVMe Controllers 01:21:06.607 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:21:06.607 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:21:06.607 Initialization complete. Launching workers. 01:21:06.607 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44049, failed: 0 01:21:06.607 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 44049, failed to submit 0 01:21:06.607 success 0, unsuccess 44049, failed 0 01:21:06.607 11:18:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:21:06.607 11:18:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:21:09.890 Initializing NVMe Controllers 01:21:09.890 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:21:09.890 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:21:09.890 Initialization complete. Launching workers. 
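Because xtrace does not show redirections, the echoes in configure_kernel_target above look like bare values; what they are writing to are the kernel nvmet configfs attributes. A sketch of the same export (paths and values as in the trace; the attribute file names are the standard nvmet ones and are assumed here, since the trace hides the redirect targets):

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet                     # nvmet_tcp gets pulled in when the TCP port comes up
  mkdir -p "$sub/namespaces/1" "$port"
  echo "SPDK-$nqn"   > "$sub/attr_model"
  echo 1             > "$sub/attr_allow_any_host"    # no host NQN allow-listing
  echo /dev/nvme1n1  > "$sub/namespaces/1/device_path"
  echo 1             > "$sub/namespaces/1/enable"
  echo 10.0.0.1      > "$port/addr_traddr"
  echo tcp           > "$port/addr_trtype"
  echo 4420          > "$port/addr_trsvcid"
  echo ipv4          > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"   # linking the subsystem into the port starts listening
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list the discovery and testnqn entries

The clean_kernel_target step later in the log is the mirror image: remove the symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.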
01:21:09.890 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77900, failed: 0 01:21:09.890 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37534, failed to submit 40366 01:21:09.890 success 0, unsuccess 37534, failed 0 01:21:09.890 11:18:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:21:09.890 11:18:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:21:13.323 Initializing NVMe Controllers 01:21:13.323 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:21:13.323 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:21:13.323 Initialization complete. Launching workers. 01:21:13.323 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 111256, failed: 0 01:21:13.323 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27766, failed to submit 83490 01:21:13.323 success 0, unsuccess 27766, failed 0 01:21:13.323 11:18:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 01:21:13.323 11:18:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:21:13.323 11:18:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 01:21:13.323 11:18:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:21:13.323 11:18:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:21:13.323 11:18:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:21:13.323 11:18:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:21:13.323 11:18:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 01:21:13.323 11:18:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 01:21:13.323 11:18:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:21:13.582 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:21:15.486 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:21:15.486 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:21:15.486 01:21:15.486 real 0m13.405s 01:21:15.486 user 0m6.403s 01:21:15.486 sys 0m4.350s 01:21:15.486 11:18:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 01:21:15.486 11:18:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 01:21:15.486 ************************************ 01:21:15.486 END TEST kernel_target_abort 01:21:15.486 ************************************ 01:21:15.486 11:18:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 01:21:15.486 11:18:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:21:15.486 
11:18:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 01:21:15.486 11:18:20 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 01:21:15.486 11:18:20 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 01:21:15.486 11:18:20 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:21:15.486 11:18:20 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 01:21:15.486 11:18:20 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 01:21:15.486 11:18:20 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:21:15.486 rmmod nvme_tcp 01:21:15.486 rmmod nvme_fabrics 01:21:15.486 rmmod nvme_keyring 01:21:15.745 11:18:20 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:21:15.745 11:18:20 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 01:21:15.745 11:18:20 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 01:21:15.745 11:18:20 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 99027 ']' 01:21:15.745 11:18:20 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 99027 01:21:15.745 11:18:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 99027 ']' 01:21:15.745 Process with pid 99027 is not found 01:21:15.745 11:18:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 99027 01:21:15.745 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (99027) - No such process 01:21:15.745 11:18:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 99027 is not found' 01:21:15.745 11:18:20 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 01:21:15.745 11:18:20 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:21:16.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:21:16.005 Waiting for block devices as requested 01:21:16.265 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:21:16.265 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:21:16.265 11:18:21 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:21:16.265 11:18:21 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:21:16.265 11:18:21 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:21:16.265 11:18:21 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 01:21:16.265 11:18:21 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:16.265 11:18:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:21:16.265 11:18:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:16.524 11:18:21 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:21:16.524 ************************************ 01:21:16.524 END TEST nvmf_abort_qd_sizes 01:21:16.524 ************************************ 01:21:16.524 01:21:16.524 real 0m27.552s 01:21:16.524 user 0m49.155s 01:21:16.524 sys 0m8.848s 01:21:16.524 11:18:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 01:21:16.524 11:18:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:21:16.524 11:18:21 -- common/autotest_common.sh@1142 -- # return 0 01:21:16.524 11:18:21 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:21:16.524 11:18:21 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 01:21:16.524 11:18:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:21:16.524 11:18:21 -- common/autotest_common.sh@10 -- # set +x 01:21:16.524 ************************************ 01:21:16.524 START TEST keyring_file 01:21:16.524 ************************************ 01:21:16.524 11:18:21 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:21:16.524 * Looking for test storage... 01:21:16.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:21:16.524 11:18:21 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:21:16.524 11:18:21 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:21:16.524 11:18:21 keyring_file -- nvmf/common.sh@7 -- # uname -s 01:21:16.524 11:18:21 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:21:16.524 11:18:21 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:21:16.524 11:18:21 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:21:16.524 11:18:21 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:21:16.524 11:18:21 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:21:16.524 11:18:21 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:21:16.524 11:18:21 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:21:16.524 11:18:21 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:21:16.524 11:18:21 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:21:16.524 11:18:21 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:21:16.525 11:18:21 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:21:16.525 11:18:21 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:21:16.525 11:18:21 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:21:16.525 11:18:21 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:16.525 11:18:21 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:16.525 11:18:21 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:16.525 11:18:21 keyring_file -- paths/export.sh@5 -- # export PATH 01:21:16.525 11:18:21 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@47 -- # : 0 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:21:16.525 11:18:21 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:21:16.785 11:18:21 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:21:16.785 11:18:21 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:21:16.785 11:18:21 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 01:21:16.785 11:18:21 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 01:21:16.785 11:18:21 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 01:21:16.785 11:18:21 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@17 -- # name=key0 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@17 -- # digest=0 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@18 -- # mktemp 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FghGm9UmS7 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:21:16.785 11:18:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 01:21:16.785 11:18:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 01:21:16.785 11:18:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:21:16.785 11:18:21 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 01:21:16.785 11:18:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 01:21:16.785 11:18:21 keyring_file -- nvmf/common.sh@705 -- # python - 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FghGm9UmS7 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FghGm9UmS7 01:21:16.785 11:18:21 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.FghGm9UmS7 01:21:16.785 11:18:21 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@17 -- # name=key1 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@17 -- # digest=0 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@18 -- # mktemp 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yZJ7PPWjnV 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:21:16.785 11:18:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:21:16.785 11:18:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 01:21:16.785 11:18:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:21:16.785 11:18:21 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 01:21:16.785 11:18:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 01:21:16.785 11:18:21 keyring_file -- nvmf/common.sh@705 -- # python - 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yZJ7PPWjnV 01:21:16.785 11:18:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yZJ7PPWjnV 01:21:16.785 11:18:21 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.yZJ7PPWjnV 01:21:16.785 11:18:21 keyring_file -- keyring/file.sh@30 -- # tgtpid=99908 01:21:16.785 11:18:21 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:21:16.785 11:18:21 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99908 01:21:16.785 11:18:21 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99908 ']' 01:21:16.785 11:18:21 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:16.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:16.785 11:18:21 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:16.785 11:18:21 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:16.785 11:18:21 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:16.785 11:18:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:21:16.785 [2024-07-22 11:18:21.925293] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
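prep_key above builds key0 and key1 the same way: mktemp allocates a path such as /tmp/tmp.FghGm9UmS7, format_interchange_psk (traced here inside nvmf/common.sh via its inline python helper) turns the hex key and digest into an NVMeTLSkey-1 interchange string, and the file is restricted to mode 0600. A sketch of doing this by hand, with the interchange string left as a placeholder because the helper's output is not echoed in this log:

  key0path=$(mktemp)
  # placeholder contents; in the test this string is generated by format_interchange_psk
  # from key=00112233445566778899aabbccddeeff and digest=0
  echo 'NVMeTLSkey-1:<hash>:<base64 PSK material>:' > "$key0path"
  chmod 0600 "$key0path"    # keyring_file_add_key rejects looser modes (see the 0660 case later in this test)

The file is then registered with the bdevperf instance later in the test via:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"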
01:21:16.785 [2024-07-22 11:18:21.925359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99908 ] 01:21:17.044 [2024-07-22 11:18:22.052741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:17.044 [2024-07-22 11:18:22.096363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:21:17.044 [2024-07-22 11:18:22.137470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:21:17.611 11:18:22 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:17.611 11:18:22 keyring_file -- common/autotest_common.sh@862 -- # return 0 01:21:17.611 11:18:22 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 01:21:17.611 11:18:22 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:17.611 11:18:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:21:17.611 [2024-07-22 11:18:22.796233] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:21:17.611 null0 01:21:17.869 [2024-07-22 11:18:22.828144] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:21:17.869 [2024-07-22 11:18:22.828442] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:21:17.869 [2024-07-22 11:18:22.836121] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:17.869 11:18:22 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@648 -- # local es=0 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:21:17.869 [2024-07-22 11:18:22.852098] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 01:21:17.869 request: 01:21:17.869 { 01:21:17.869 "nqn": "nqn.2016-06.io.spdk:cnode0", 01:21:17.869 "secure_channel": false, 01:21:17.869 "listen_address": { 01:21:17.869 "trtype": "tcp", 01:21:17.869 "traddr": "127.0.0.1", 01:21:17.869 "trsvcid": "4420" 01:21:17.869 }, 01:21:17.869 "method": "nvmf_subsystem_add_listener", 01:21:17.869 "req_id": 1 01:21:17.869 } 01:21:17.869 Got JSON-RPC error response 01:21:17.869 response: 01:21:17.869 { 01:21:17.869 "code": -32602, 01:21:17.869 "message": "Invalid parameters" 01:21:17.869 } 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
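spdk_tgt was brought up above with a TCP listener for nqn.2016-06.io.spdk:cnode0 on 127.0.0.1:4420, so the NOT-wrapped rpc_cmd here is a negative test: adding the same listener again is expected to fail, and the -32602 "Invalid parameters" / "Listener already exists" response is the pass condition (NOT inverts the exit status, hence the es=1 handling that follows). rpc_cmd is a thin wrapper around rpc.py, so the equivalent direct call would look like this sketch (socket path as announced by spdk_tgt above):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0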
01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@651 -- # es=1 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:21:17.869 11:18:22 keyring_file -- keyring/file.sh@46 -- # bperfpid=99924 01:21:17.869 11:18:22 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 01:21:17.869 11:18:22 keyring_file -- keyring/file.sh@48 -- # waitforlisten 99924 /var/tmp/bperf.sock 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99924 ']' 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:21:17.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:17.869 11:18:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:21:17.869 [2024-07-22 11:18:22.913413] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:21:17.869 [2024-07-22 11:18:22.913596] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99924 ] 01:21:17.869 [2024-07-22 11:18:23.052790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:18.126 [2024-07-22 11:18:23.095169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:21:18.126 [2024-07-22 11:18:23.136834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:21:18.691 11:18:23 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:18.691 11:18:23 keyring_file -- common/autotest_common.sh@862 -- # return 0 01:21:18.691 11:18:23 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FghGm9UmS7 01:21:18.691 11:18:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FghGm9UmS7 01:21:18.949 11:18:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yZJ7PPWjnV 01:21:18.949 11:18:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yZJ7PPWjnV 01:21:18.949 11:18:24 keyring_file -- keyring/file.sh@51 -- # get_key key0 01:21:18.949 11:18:24 keyring_file -- keyring/file.sh@51 -- # jq -r .path 01:21:18.949 11:18:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:18.949 11:18:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:21:18.949 11:18:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:19.240 11:18:24 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.FghGm9UmS7 == 
\/\t\m\p\/\t\m\p\.\F\g\h\G\m\9\U\m\S\7 ]] 01:21:19.240 11:18:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 01:21:19.240 11:18:24 keyring_file -- keyring/file.sh@52 -- # get_key key1 01:21:19.240 11:18:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:21:19.240 11:18:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:19.240 11:18:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:19.498 11:18:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.yZJ7PPWjnV == \/\t\m\p\/\t\m\p\.\y\Z\J\7\P\P\W\j\n\V ]] 01:21:19.498 11:18:24 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 01:21:19.498 11:18:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:21:19.498 11:18:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:21:19.498 11:18:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:19.498 11:18:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:19.498 11:18:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:21:19.756 11:18:24 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 01:21:19.756 11:18:24 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 01:21:19.756 11:18:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:21:19.756 11:18:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:21:19.756 11:18:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:19.756 11:18:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:19.756 11:18:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:21:20.014 11:18:24 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 01:21:20.014 11:18:24 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:21:20.014 11:18:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:21:20.014 [2024-07-22 11:18:25.142883] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:21:20.014 nvme0n1 01:21:20.273 11:18:25 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 01:21:20.273 11:18:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:21:20.273 11:18:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:21:20.273 11:18:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:21:20.273 11:18:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:20.273 11:18:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:20.273 11:18:25 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 01:21:20.273 11:18:25 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 01:21:20.273 11:18:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:21:20.273 11:18:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:21:20.273 11:18:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 01:21:20.273 11:18:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:20.273 11:18:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:20.866 11:18:25 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 01:21:20.866 11:18:25 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:21:20.866 Running I/O for 1 seconds... 01:21:21.824 01:21:21.824 Latency(us) 01:21:21.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:21.824 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 01:21:21.824 nvme0n1 : 1.00 16443.07 64.23 0.00 0.00 7763.41 4605.94 17476.27 01:21:21.824 =================================================================================================================== 01:21:21.824 Total : 16443.07 64.23 0.00 0.00 7763.41 4605.94 17476.27 01:21:21.824 0 01:21:21.824 11:18:26 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:21:21.824 11:18:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:21:22.082 11:18:27 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 01:21:22.082 11:18:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:21:22.082 11:18:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:21:22.082 11:18:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:22.082 11:18:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:22.082 11:18:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:21:22.082 11:18:27 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 01:21:22.082 11:18:27 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 01:21:22.082 11:18:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:21:22.082 11:18:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:21:22.082 11:18:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:21:22.082 11:18:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:22.082 11:18:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:22.339 11:18:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 01:21:22.339 11:18:27 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:21:22.339 11:18:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 01:21:22.339 11:18:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:21:22.339 11:18:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 01:21:22.339 11:18:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:22.339 11:18:27 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 01:21:22.339 11:18:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
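The I/O pass above attaches controller nvme0 to the loopback target with the file-backed PSK registered as key0, drives the one-second randrw job through bdevperf.py perform_tests, and then detaches it; the refcnt checks on key0 and key1 bracket that sequence. A sketch of the attach/detach RPC pair, as issued against the bdevperf socket in this test:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0

The NOT-wrapped attempt that continues below swaps in --psk key1 and is expected to fail, which the bdev_nvme_attach_controller error response that follows confirms.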
01:21:22.339 11:18:27 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:21:22.339 11:18:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:21:22.597 [2024-07-22 11:18:27.703277] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:21:22.597 [2024-07-22 11:18:27.703785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4bdd0 (107): Transport endpoint is not connected 01:21:22.597 [2024-07-22 11:18:27.704774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4bdd0 (9): Bad file descriptor 01:21:22.597 [2024-07-22 11:18:27.705771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:21:22.597 [2024-07-22 11:18:27.705786] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:21:22.597 [2024-07-22 11:18:27.705795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:21:22.597 request: 01:21:22.597 { 01:21:22.597 "name": "nvme0", 01:21:22.597 "trtype": "tcp", 01:21:22.597 "traddr": "127.0.0.1", 01:21:22.597 "adrfam": "ipv4", 01:21:22.597 "trsvcid": "4420", 01:21:22.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:21:22.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:21:22.597 "prchk_reftag": false, 01:21:22.597 "prchk_guard": false, 01:21:22.597 "hdgst": false, 01:21:22.597 "ddgst": false, 01:21:22.597 "psk": "key1", 01:21:22.597 "method": "bdev_nvme_attach_controller", 01:21:22.597 "req_id": 1 01:21:22.597 } 01:21:22.597 Got JSON-RPC error response 01:21:22.597 response: 01:21:22.597 { 01:21:22.597 "code": -5, 01:21:22.597 "message": "Input/output error" 01:21:22.597 } 01:21:22.597 11:18:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 01:21:22.597 11:18:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:21:22.597 11:18:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:21:22.597 11:18:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:21:22.597 11:18:27 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 01:21:22.597 11:18:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:21:22.597 11:18:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:21:22.597 11:18:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:22.597 11:18:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:21:22.597 11:18:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:22.855 11:18:27 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 01:21:22.855 11:18:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 01:21:22.855 11:18:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:21:22.855 11:18:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:21:22.855 11:18:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:22.855 11:18:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 01:21:22.855 11:18:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:23.112 11:18:28 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 01:21:23.112 11:18:28 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 01:21:23.112 11:18:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:21:23.368 11:18:28 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 01:21:23.368 11:18:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 01:21:23.368 11:18:28 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 01:21:23.368 11:18:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:23.368 11:18:28 keyring_file -- keyring/file.sh@77 -- # jq length 01:21:23.626 11:18:28 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 01:21:23.626 11:18:28 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.FghGm9UmS7 01:21:23.626 11:18:28 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.FghGm9UmS7 01:21:23.626 11:18:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 01:21:23.626 11:18:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.FghGm9UmS7 01:21:23.626 11:18:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 01:21:23.626 11:18:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:23.626 11:18:28 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 01:21:23.626 11:18:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:23.626 11:18:28 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FghGm9UmS7 01:21:23.626 11:18:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FghGm9UmS7 01:21:23.883 [2024-07-22 11:18:28.926359] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FghGm9UmS7': 0100660 01:21:23.883 [2024-07-22 11:18:28.926406] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:21:23.883 request: 01:21:23.883 { 01:21:23.883 "name": "key0", 01:21:23.883 "path": "/tmp/tmp.FghGm9UmS7", 01:21:23.883 "method": "keyring_file_add_key", 01:21:23.883 "req_id": 1 01:21:23.883 } 01:21:23.883 Got JSON-RPC error response 01:21:23.883 response: 01:21:23.883 { 01:21:23.883 "code": -1, 01:21:23.883 "message": "Operation not permitted" 01:21:23.883 } 01:21:23.883 11:18:28 keyring_file -- common/autotest_common.sh@651 -- # es=1 01:21:23.883 11:18:28 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:21:23.883 11:18:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:21:23.884 11:18:28 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:21:23.884 11:18:28 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.FghGm9UmS7 01:21:23.884 11:18:28 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FghGm9UmS7 01:21:23.884 11:18:28 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FghGm9UmS7 01:21:24.141 11:18:29 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.FghGm9UmS7 01:21:24.141 11:18:29 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 01:21:24.141 11:18:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:21:24.141 11:18:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:21:24.141 11:18:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:24.141 11:18:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:21:24.141 11:18:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:24.399 11:18:29 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 01:21:24.399 11:18:29 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:21:24.399 11:18:29 keyring_file -- common/autotest_common.sh@648 -- # local es=0 01:21:24.399 11:18:29 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:21:24.399 11:18:29 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 01:21:24.399 11:18:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:24.399 11:18:29 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 01:21:24.399 11:18:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:24.399 11:18:29 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:21:24.399 11:18:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:21:24.399 [2024-07-22 11:18:29.569483] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.FghGm9UmS7': No such file or directory 01:21:24.399 [2024-07-22 11:18:29.569787] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 01:21:24.399 [2024-07-22 11:18:29.569919] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 01:21:24.399 [2024-07-22 11:18:29.569951] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:21:24.399 [2024-07-22 11:18:29.570042] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 01:21:24.399 request: 01:21:24.399 { 01:21:24.399 "name": "nvme0", 01:21:24.399 "trtype": "tcp", 01:21:24.399 "traddr": "127.0.0.1", 01:21:24.399 "adrfam": "ipv4", 01:21:24.399 "trsvcid": "4420", 01:21:24.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:21:24.399 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:21:24.399 "prchk_reftag": false, 01:21:24.399 "prchk_guard": false, 01:21:24.399 "hdgst": false, 01:21:24.399 "ddgst": false, 01:21:24.399 "psk": "key0", 01:21:24.399 "method": "bdev_nvme_attach_controller", 01:21:24.399 "req_id": 1 01:21:24.399 } 01:21:24.399 
Got JSON-RPC error response 01:21:24.399 response: 01:21:24.399 { 01:21:24.399 "code": -19, 01:21:24.399 "message": "No such device" 01:21:24.399 } 01:21:24.399 11:18:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 01:21:24.399 11:18:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:21:24.399 11:18:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:21:24.399 11:18:29 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:21:24.399 11:18:29 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 01:21:24.399 11:18:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:21:24.658 11:18:29 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:21:24.658 11:18:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:21:24.658 11:18:29 keyring_file -- keyring/common.sh@17 -- # name=key0 01:21:24.658 11:18:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:21:24.658 11:18:29 keyring_file -- keyring/common.sh@17 -- # digest=0 01:21:24.658 11:18:29 keyring_file -- keyring/common.sh@18 -- # mktemp 01:21:24.658 11:18:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.aHSRHQyzUZ 01:21:24.658 11:18:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:21:24.658 11:18:29 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:21:24.658 11:18:29 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 01:21:24.658 11:18:29 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:21:24.658 11:18:29 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 01:21:24.658 11:18:29 keyring_file -- nvmf/common.sh@704 -- # digest=0 01:21:24.658 11:18:29 keyring_file -- nvmf/common.sh@705 -- # python - 01:21:24.658 11:18:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.aHSRHQyzUZ 01:21:24.658 11:18:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.aHSRHQyzUZ 01:21:24.658 11:18:29 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.aHSRHQyzUZ 01:21:24.658 11:18:29 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aHSRHQyzUZ 01:21:24.658 11:18:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aHSRHQyzUZ 01:21:24.916 11:18:30 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:21:24.916 11:18:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:21:25.173 nvme0n1 01:21:25.173 11:18:30 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 01:21:25.173 11:18:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:21:25.173 11:18:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:21:25.173 11:18:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:25.173 11:18:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:21:25.173 11:18:30 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:25.431 11:18:30 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 01:21:25.431 11:18:30 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 01:21:25.431 11:18:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:21:25.690 11:18:30 keyring_file -- keyring/file.sh@101 -- # get_key key0 01:21:25.690 11:18:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:21:25.690 11:18:30 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 01:21:25.690 11:18:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:25.690 11:18:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:25.947 11:18:30 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 01:21:25.947 11:18:30 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 01:21:25.947 11:18:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:21:25.947 11:18:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:21:25.947 11:18:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:21:25.947 11:18:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:25.947 11:18:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:25.947 11:18:31 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 01:21:25.947 11:18:31 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:21:25.947 11:18:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:21:26.513 11:18:31 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 01:21:26.513 11:18:31 keyring_file -- keyring/file.sh@104 -- # jq length 01:21:26.513 11:18:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:26.513 11:18:31 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 01:21:26.513 11:18:31 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aHSRHQyzUZ 01:21:26.513 11:18:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aHSRHQyzUZ 01:21:26.772 11:18:31 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yZJ7PPWjnV 01:21:26.772 11:18:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yZJ7PPWjnV 01:21:27.031 11:18:32 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:21:27.031 11:18:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:21:27.290 nvme0n1 01:21:27.290 11:18:32 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 01:21:27.290 11:18:32 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 01:21:27.548 11:18:32 keyring_file -- keyring/file.sh@112 -- # config='{ 01:21:27.548 "subsystems": [ 01:21:27.548 { 01:21:27.548 "subsystem": "keyring", 01:21:27.548 "config": [ 01:21:27.548 { 01:21:27.548 "method": "keyring_file_add_key", 01:21:27.548 "params": { 01:21:27.548 "name": "key0", 01:21:27.548 "path": "/tmp/tmp.aHSRHQyzUZ" 01:21:27.548 } 01:21:27.548 }, 01:21:27.548 { 01:21:27.548 "method": "keyring_file_add_key", 01:21:27.548 "params": { 01:21:27.548 "name": "key1", 01:21:27.548 "path": "/tmp/tmp.yZJ7PPWjnV" 01:21:27.548 } 01:21:27.548 } 01:21:27.548 ] 01:21:27.548 }, 01:21:27.548 { 01:21:27.548 "subsystem": "iobuf", 01:21:27.548 "config": [ 01:21:27.548 { 01:21:27.548 "method": "iobuf_set_options", 01:21:27.548 "params": { 01:21:27.548 "small_pool_count": 8192, 01:21:27.548 "large_pool_count": 1024, 01:21:27.548 "small_bufsize": 8192, 01:21:27.548 "large_bufsize": 135168 01:21:27.548 } 01:21:27.548 } 01:21:27.548 ] 01:21:27.548 }, 01:21:27.548 { 01:21:27.548 "subsystem": "sock", 01:21:27.548 "config": [ 01:21:27.548 { 01:21:27.548 "method": "sock_set_default_impl", 01:21:27.548 "params": { 01:21:27.548 "impl_name": "uring" 01:21:27.548 } 01:21:27.548 }, 01:21:27.548 { 01:21:27.548 "method": "sock_impl_set_options", 01:21:27.548 "params": { 01:21:27.548 "impl_name": "ssl", 01:21:27.548 "recv_buf_size": 4096, 01:21:27.548 "send_buf_size": 4096, 01:21:27.548 "enable_recv_pipe": true, 01:21:27.548 "enable_quickack": false, 01:21:27.548 "enable_placement_id": 0, 01:21:27.548 "enable_zerocopy_send_server": true, 01:21:27.548 "enable_zerocopy_send_client": false, 01:21:27.548 "zerocopy_threshold": 0, 01:21:27.548 "tls_version": 0, 01:21:27.548 "enable_ktls": false 01:21:27.548 } 01:21:27.548 }, 01:21:27.548 { 01:21:27.548 "method": "sock_impl_set_options", 01:21:27.548 "params": { 01:21:27.548 "impl_name": "posix", 01:21:27.548 "recv_buf_size": 2097152, 01:21:27.548 "send_buf_size": 2097152, 01:21:27.548 "enable_recv_pipe": true, 01:21:27.548 "enable_quickack": false, 01:21:27.548 "enable_placement_id": 0, 01:21:27.548 "enable_zerocopy_send_server": true, 01:21:27.548 "enable_zerocopy_send_client": false, 01:21:27.548 "zerocopy_threshold": 0, 01:21:27.548 "tls_version": 0, 01:21:27.548 "enable_ktls": false 01:21:27.548 } 01:21:27.548 }, 01:21:27.548 { 01:21:27.548 "method": "sock_impl_set_options", 01:21:27.548 "params": { 01:21:27.548 "impl_name": "uring", 01:21:27.548 "recv_buf_size": 2097152, 01:21:27.548 "send_buf_size": 2097152, 01:21:27.548 "enable_recv_pipe": true, 01:21:27.548 "enable_quickack": false, 01:21:27.548 "enable_placement_id": 0, 01:21:27.548 "enable_zerocopy_send_server": false, 01:21:27.548 "enable_zerocopy_send_client": false, 01:21:27.548 "zerocopy_threshold": 0, 01:21:27.548 "tls_version": 0, 01:21:27.548 "enable_ktls": false 01:21:27.548 } 01:21:27.548 } 01:21:27.548 ] 01:21:27.548 }, 01:21:27.548 { 01:21:27.548 "subsystem": "vmd", 01:21:27.548 "config": [] 01:21:27.548 }, 01:21:27.548 { 01:21:27.548 "subsystem": "accel", 01:21:27.548 "config": [ 01:21:27.548 { 01:21:27.548 "method": "accel_set_options", 01:21:27.548 "params": { 01:21:27.548 "small_cache_size": 128, 01:21:27.548 "large_cache_size": 16, 01:21:27.548 "task_count": 2048, 01:21:27.548 "sequence_count": 2048, 01:21:27.548 "buf_count": 2048 01:21:27.548 } 01:21:27.548 } 01:21:27.548 ] 01:21:27.548 }, 01:21:27.548 { 01:21:27.548 "subsystem": "bdev", 01:21:27.548 "config": [ 01:21:27.548 { 
01:21:27.548 "method": "bdev_set_options", 01:21:27.548 "params": { 01:21:27.548 "bdev_io_pool_size": 65535, 01:21:27.548 "bdev_io_cache_size": 256, 01:21:27.548 "bdev_auto_examine": true, 01:21:27.548 "iobuf_small_cache_size": 128, 01:21:27.548 "iobuf_large_cache_size": 16 01:21:27.548 } 01:21:27.548 }, 01:21:27.548 { 01:21:27.548 "method": "bdev_raid_set_options", 01:21:27.548 "params": { 01:21:27.548 "process_window_size_kb": 1024, 01:21:27.548 "process_max_bandwidth_mb_sec": 0 01:21:27.548 } 01:21:27.548 }, 01:21:27.548 { 01:21:27.548 "method": "bdev_iscsi_set_options", 01:21:27.548 "params": { 01:21:27.548 "timeout_sec": 30 01:21:27.548 } 01:21:27.548 }, 01:21:27.548 { 01:21:27.548 "method": "bdev_nvme_set_options", 01:21:27.548 "params": { 01:21:27.548 "action_on_timeout": "none", 01:21:27.548 "timeout_us": 0, 01:21:27.548 "timeout_admin_us": 0, 01:21:27.548 "keep_alive_timeout_ms": 10000, 01:21:27.548 "arbitration_burst": 0, 01:21:27.548 "low_priority_weight": 0, 01:21:27.548 "medium_priority_weight": 0, 01:21:27.548 "high_priority_weight": 0, 01:21:27.548 "nvme_adminq_poll_period_us": 10000, 01:21:27.548 "nvme_ioq_poll_period_us": 0, 01:21:27.548 "io_queue_requests": 512, 01:21:27.548 "delay_cmd_submit": true, 01:21:27.548 "transport_retry_count": 4, 01:21:27.548 "bdev_retry_count": 3, 01:21:27.548 "transport_ack_timeout": 0, 01:21:27.548 "ctrlr_loss_timeout_sec": 0, 01:21:27.548 "reconnect_delay_sec": 0, 01:21:27.548 "fast_io_fail_timeout_sec": 0, 01:21:27.548 "disable_auto_failback": false, 01:21:27.548 "generate_uuids": false, 01:21:27.548 "transport_tos": 0, 01:21:27.548 "nvme_error_stat": false, 01:21:27.548 "rdma_srq_size": 0, 01:21:27.548 "io_path_stat": false, 01:21:27.548 "allow_accel_sequence": false, 01:21:27.548 "rdma_max_cq_size": 0, 01:21:27.548 "rdma_cm_event_timeout_ms": 0, 01:21:27.548 "dhchap_digests": [ 01:21:27.548 "sha256", 01:21:27.548 "sha384", 01:21:27.548 "sha512" 01:21:27.548 ], 01:21:27.548 "dhchap_dhgroups": [ 01:21:27.548 "null", 01:21:27.548 "ffdhe2048", 01:21:27.548 "ffdhe3072", 01:21:27.548 "ffdhe4096", 01:21:27.548 "ffdhe6144", 01:21:27.548 "ffdhe8192" 01:21:27.548 ] 01:21:27.548 } 01:21:27.548 }, 01:21:27.548 { 01:21:27.548 "method": "bdev_nvme_attach_controller", 01:21:27.548 "params": { 01:21:27.548 "name": "nvme0", 01:21:27.548 "trtype": "TCP", 01:21:27.548 "adrfam": "IPv4", 01:21:27.548 "traddr": "127.0.0.1", 01:21:27.548 "trsvcid": "4420", 01:21:27.549 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:21:27.549 "prchk_reftag": false, 01:21:27.549 "prchk_guard": false, 01:21:27.549 "ctrlr_loss_timeout_sec": 0, 01:21:27.549 "reconnect_delay_sec": 0, 01:21:27.549 "fast_io_fail_timeout_sec": 0, 01:21:27.549 "psk": "key0", 01:21:27.549 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:21:27.549 "hdgst": false, 01:21:27.549 "ddgst": false 01:21:27.549 } 01:21:27.549 }, 01:21:27.549 { 01:21:27.549 "method": "bdev_nvme_set_hotplug", 01:21:27.549 "params": { 01:21:27.549 "period_us": 100000, 01:21:27.549 "enable": false 01:21:27.549 } 01:21:27.549 }, 01:21:27.549 { 01:21:27.549 "method": "bdev_wait_for_examine" 01:21:27.549 } 01:21:27.549 ] 01:21:27.549 }, 01:21:27.549 { 01:21:27.549 "subsystem": "nbd", 01:21:27.549 "config": [] 01:21:27.549 } 01:21:27.549 ] 01:21:27.549 }' 01:21:27.549 11:18:32 keyring_file -- keyring/file.sh@114 -- # killprocess 99924 01:21:27.549 11:18:32 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99924 ']' 01:21:27.549 11:18:32 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99924 01:21:27.549 11:18:32 keyring_file -- 
common/autotest_common.sh@953 -- # uname 01:21:27.549 11:18:32 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:27.549 11:18:32 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99924 01:21:27.549 killing process with pid 99924 01:21:27.549 Received shutdown signal, test time was about 1.000000 seconds 01:21:27.549 01:21:27.549 Latency(us) 01:21:27.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:27.549 =================================================================================================================== 01:21:27.549 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:21:27.549 11:18:32 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:21:27.549 11:18:32 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:21:27.549 11:18:32 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99924' 01:21:27.549 11:18:32 keyring_file -- common/autotest_common.sh@967 -- # kill 99924 01:21:27.549 11:18:32 keyring_file -- common/autotest_common.sh@972 -- # wait 99924 01:21:27.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:21:27.807 11:18:32 keyring_file -- keyring/file.sh@117 -- # bperfpid=100158 01:21:27.807 11:18:32 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100158 /var/tmp/bperf.sock 01:21:27.807 11:18:32 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100158 ']' 01:21:27.807 11:18:32 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:21:27.807 11:18:32 keyring_file -- keyring/file.sh@115 -- # echo '{ 01:21:27.807 "subsystems": [ 01:21:27.807 { 01:21:27.807 "subsystem": "keyring", 01:21:27.807 "config": [ 01:21:27.807 { 01:21:27.807 "method": "keyring_file_add_key", 01:21:27.807 "params": { 01:21:27.807 "name": "key0", 01:21:27.807 "path": "/tmp/tmp.aHSRHQyzUZ" 01:21:27.807 } 01:21:27.807 }, 01:21:27.807 { 01:21:27.807 "method": "keyring_file_add_key", 01:21:27.807 "params": { 01:21:27.807 "name": "key1", 01:21:27.807 "path": "/tmp/tmp.yZJ7PPWjnV" 01:21:27.807 } 01:21:27.807 } 01:21:27.807 ] 01:21:27.807 }, 01:21:27.807 { 01:21:27.807 "subsystem": "iobuf", 01:21:27.807 "config": [ 01:21:27.807 { 01:21:27.807 "method": "iobuf_set_options", 01:21:27.807 "params": { 01:21:27.808 "small_pool_count": 8192, 01:21:27.808 "large_pool_count": 1024, 01:21:27.808 "small_bufsize": 8192, 01:21:27.808 "large_bufsize": 135168 01:21:27.808 } 01:21:27.808 } 01:21:27.808 ] 01:21:27.808 }, 01:21:27.808 { 01:21:27.808 "subsystem": "sock", 01:21:27.808 "config": [ 01:21:27.808 { 01:21:27.808 "method": "sock_set_default_impl", 01:21:27.808 "params": { 01:21:27.808 "impl_name": "uring" 01:21:27.808 } 01:21:27.808 }, 01:21:27.808 { 01:21:27.808 "method": "sock_impl_set_options", 01:21:27.808 "params": { 01:21:27.808 "impl_name": "ssl", 01:21:27.808 "recv_buf_size": 4096, 01:21:27.808 "send_buf_size": 4096, 01:21:27.808 "enable_recv_pipe": true, 01:21:27.808 "enable_quickack": false, 01:21:27.808 "enable_placement_id": 0, 01:21:27.808 "enable_zerocopy_send_server": true, 01:21:27.808 "enable_zerocopy_send_client": false, 01:21:27.808 "zerocopy_threshold": 0, 01:21:27.808 "tls_version": 0, 01:21:27.808 "enable_ktls": false 01:21:27.808 } 01:21:27.808 }, 01:21:27.808 { 01:21:27.808 "method": "sock_impl_set_options", 01:21:27.808 "params": { 01:21:27.808 "impl_name": "posix", 01:21:27.808 "recv_buf_size": 2097152, 01:21:27.808 "send_buf_size": 2097152, 
01:21:27.808 "enable_recv_pipe": true, 01:21:27.808 "enable_quickack": false, 01:21:27.808 "enable_placement_id": 0, 01:21:27.808 "enable_zerocopy_send_server": true, 01:21:27.808 "enable_zerocopy_send_client": false, 01:21:27.808 "zerocopy_threshold": 0, 01:21:27.808 "tls_version": 0, 01:21:27.808 "enable_ktls": false 01:21:27.808 } 01:21:27.808 }, 01:21:27.808 { 01:21:27.808 "method": "sock_impl_set_options", 01:21:27.808 "params": { 01:21:27.808 "impl_name": "uring", 01:21:27.808 "recv_buf_size": 2097152, 01:21:27.808 "send_buf_size": 2097152, 01:21:27.808 "enable_recv_pipe": true, 01:21:27.808 "enable_quickack": false, 01:21:27.808 "enable_placement_id": 0, 01:21:27.808 "enable_zerocopy_send_server": false, 01:21:27.808 "enable_zerocopy_send_client": false, 01:21:27.808 "zerocopy_threshold": 0, 01:21:27.808 "tls_version": 0, 01:21:27.808 "enable_ktls": false 01:21:27.808 } 01:21:27.808 } 01:21:27.808 ] 01:21:27.808 }, 01:21:27.808 { 01:21:27.808 "subsystem": "vmd", 01:21:27.808 "config": [] 01:21:27.808 }, 01:21:27.808 { 01:21:27.808 "subsystem": "accel", 01:21:27.808 "config": [ 01:21:27.808 { 01:21:27.808 "method": "accel_set_options", 01:21:27.808 "params": { 01:21:27.808 "small_cache_size": 128, 01:21:27.808 "large_cache_size": 16, 01:21:27.808 "task_count": 2048, 01:21:27.808 "sequence_count": 2048, 01:21:27.808 "buf_count": 2048 01:21:27.808 } 01:21:27.808 } 01:21:27.808 ] 01:21:27.808 }, 01:21:27.808 { 01:21:27.808 "subsystem": "bdev", 01:21:27.808 "config": [ 01:21:27.808 { 01:21:27.808 "method": "bdev_set_options", 01:21:27.808 "params": { 01:21:27.808 "bdev_io_pool_size": 65535, 01:21:27.808 "bdev_io_cache_size": 256, 01:21:27.808 "bdev_auto_examine": true, 01:21:27.808 "iobuf_small_cache_size": 128, 01:21:27.808 "iobuf_large_cache_size": 16 01:21:27.808 } 01:21:27.808 }, 01:21:27.808 { 01:21:27.808 "method": "bdev_raid_set_options", 01:21:27.808 "params": { 01:21:27.808 "process_window_size_kb": 1024, 01:21:27.808 "process_max_bandwidth_mb_sec": 0 01:21:27.808 } 01:21:27.808 }, 01:21:27.808 { 01:21:27.808 "method": "bdev_iscsi_set_options", 01:21:27.808 "params": { 01:21:27.808 "timeout_sec": 30 01:21:27.808 } 01:21:27.808 }, 01:21:27.808 { 01:21:27.808 "method": "bdev_nvme_set_options", 01:21:27.808 "params": { 01:21:27.808 "action_on_timeout": "none", 01:21:27.808 "timeout_us": 0, 01:21:27.808 "timeout_admin_us": 0, 01:21:27.808 "keep_alive_timeout_ms": 10000, 01:21:27.808 "arbitration_burst": 0, 01:21:27.808 "low_priority_weight": 0, 01:21:27.808 "medium_priority_weight": 0, 01:21:27.808 "high_priority_weight": 0, 01:21:27.808 "nvme_adminq_poll_period_us": 10000, 01:21:27.808 "nvme_ioq_poll_period_us": 0, 01:21:27.808 "io_queue_requests": 512, 01:21:27.808 "delay_cmd_submit": true, 01:21:27.808 "transport_retry_count": 4, 01:21:27.808 "bdev_retry_count": 3, 01:21:27.808 "transport_ack_timeout": 0, 01:21:27.808 "ctrlr_loss_timeout_sec": 0, 01:21:27.808 "reconnect_delay_sec": 0, 01:21:27.808 "fast_io_fail_timeout_sec": 0, 01:21:27.808 "disable_auto_failback": false, 01:21:27.808 "generate_uuids": false, 01:21:27.808 "transport_tos": 0, 01:21:27.808 "nvme_error_stat": false, 01:21:27.808 "rdma_srq_size": 0, 01:21:27.808 "io_path_stat": false, 01:21:27.808 "allow_accel_sequence": false, 01:21:27.808 "rdma_max_cq_size": 0, 01:21:27.808 "rdma_cm_event_timeout_ms": 0, 01:21:27.808 "dhchap_digests": [ 01:21:27.808 "sha256", 01:21:27.808 "sha384", 01:21:27.808 "sha512" 01:21:27.808 ], 01:21:27.808 "dhchap_dhgroups": [ 01:21:27.808 "null", 01:21:27.808 "ffdhe2048", 01:21:27.808 
"ffdhe3072", 01:21:27.808 "ffdhe4096", 01:21:27.808 "ffdhe6144", 01:21:27.808 "ffdhe8192" 01:21:27.808 ] 01:21:27.808 } 01:21:27.808 }, 01:21:27.808 { 01:21:27.808 "method": "bdev_nvme_attach_controller", 01:21:27.808 "params": { 01:21:27.808 "name": "nvme0", 01:21:27.808 "trtype": "TCP", 01:21:27.808 "adrfam": "IPv4", 01:21:27.808 "traddr": "127.0.0.1", 01:21:27.808 "trsvcid": "4420", 01:21:27.808 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:21:27.808 "prchk_reftag": false, 01:21:27.808 "prchk_guard": false, 01:21:27.808 "ctrlr_loss_timeout_sec": 0, 01:21:27.808 "reconnect_delay_sec": 0, 01:21:27.808 "fast_io_fail_timeout_sec": 0, 01:21:27.808 "psk": "key0", 01:21:27.808 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:21:27.808 "hdgst": false, 01:21:27.808 "ddgst": false 01:21:27.808 } 01:21:27.808 }, 01:21:27.808 { 01:21:27.808 "method": "bdev_nvme_set_hotplug", 01:21:27.808 "params": { 01:21:27.808 "period_us": 100000, 01:21:27.808 "enable": false 01:21:27.808 } 01:21:27.808 }, 01:21:27.808 { 01:21:27.808 "method": "bdev_wait_for_examine" 01:21:27.808 } 01:21:27.808 ] 01:21:27.808 }, 01:21:27.808 { 01:21:27.808 "subsystem": "nbd", 01:21:27.808 "config": [] 01:21:27.808 } 01:21:27.808 ] 01:21:27.808 }' 01:21:27.808 11:18:32 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:27.808 11:18:32 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 01:21:27.808 11:18:32 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:21:27.808 11:18:32 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:27.808 11:18:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:21:27.808 [2024-07-22 11:18:32.821268] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:21:27.808 [2024-07-22 11:18:32.821491] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100158 ] 01:21:27.808 [2024-07-22 11:18:32.963549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:27.808 [2024-07-22 11:18:33.006639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:21:28.069 [2024-07-22 11:18:33.129245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:21:28.069 [2024-07-22 11:18:33.171151] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:21:28.636 11:18:33 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:28.636 11:18:33 keyring_file -- common/autotest_common.sh@862 -- # return 0 01:21:28.636 11:18:33 keyring_file -- keyring/file.sh@120 -- # jq length 01:21:28.636 11:18:33 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 01:21:28.636 11:18:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:28.894 11:18:33 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 01:21:28.894 11:18:33 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 01:21:28.894 11:18:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:21:28.894 11:18:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:21:28.894 11:18:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:28.894 11:18:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:28.894 11:18:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:21:28.894 11:18:34 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 01:21:28.894 11:18:34 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 01:21:28.894 11:18:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:21:28.894 11:18:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:21:28.895 11:18:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:28.895 11:18:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:21:28.895 11:18:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:29.153 11:18:34 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 01:21:29.153 11:18:34 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 01:21:29.153 11:18:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 01:21:29.153 11:18:34 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 01:21:29.412 11:18:34 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 01:21:29.412 11:18:34 keyring_file -- keyring/file.sh@1 -- # cleanup 01:21:29.412 11:18:34 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.aHSRHQyzUZ /tmp/tmp.yZJ7PPWjnV 01:21:29.412 11:18:34 keyring_file -- keyring/file.sh@20 -- # killprocess 100158 01:21:29.412 11:18:34 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100158 ']' 01:21:29.412 11:18:34 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100158 01:21:29.412 11:18:34 keyring_file -- 
common/autotest_common.sh@953 -- # uname 01:21:29.412 11:18:34 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:29.412 11:18:34 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100158 01:21:29.412 killing process with pid 100158 01:21:29.412 Received shutdown signal, test time was about 1.000000 seconds 01:21:29.412 01:21:29.412 Latency(us) 01:21:29.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:29.412 =================================================================================================================== 01:21:29.412 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:21:29.412 11:18:34 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:21:29.412 11:18:34 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:21:29.412 11:18:34 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100158' 01:21:29.412 11:18:34 keyring_file -- common/autotest_common.sh@967 -- # kill 100158 01:21:29.412 11:18:34 keyring_file -- common/autotest_common.sh@972 -- # wait 100158 01:21:29.671 11:18:34 keyring_file -- keyring/file.sh@21 -- # killprocess 99908 01:21:29.671 11:18:34 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99908 ']' 01:21:29.671 11:18:34 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99908 01:21:29.671 11:18:34 keyring_file -- common/autotest_common.sh@953 -- # uname 01:21:29.671 11:18:34 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:29.671 11:18:34 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99908 01:21:29.671 killing process with pid 99908 01:21:29.671 11:18:34 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:21:29.671 11:18:34 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:21:29.671 11:18:34 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99908' 01:21:29.671 11:18:34 keyring_file -- common/autotest_common.sh@967 -- # kill 99908 01:21:29.671 [2024-07-22 11:18:34.748956] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:21:29.671 11:18:34 keyring_file -- common/autotest_common.sh@972 -- # wait 99908 01:21:29.930 01:21:29.930 real 0m13.474s 01:21:29.930 user 0m32.236s 01:21:29.930 sys 0m3.192s 01:21:29.930 ************************************ 01:21:29.930 END TEST keyring_file 01:21:29.930 ************************************ 01:21:29.930 11:18:35 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 01:21:29.930 11:18:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:21:29.930 11:18:35 -- common/autotest_common.sh@1142 -- # return 0 01:21:29.930 11:18:35 -- spdk/autotest.sh@296 -- # [[ y == y ]] 01:21:29.930 11:18:35 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:21:29.930 11:18:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:21:29.930 11:18:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:21:29.930 11:18:35 -- common/autotest_common.sh@10 -- # set +x 01:21:29.930 ************************************ 01:21:29.930 START TEST keyring_linux 01:21:29.930 ************************************ 01:21:29.930 11:18:35 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:21:30.189 * Looking for 
test storage... 01:21:30.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:21:30.189 11:18:35 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:21:30.189 11:18:35 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@7 -- # uname -s 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7758934d-ca6b-403e-9e5d-3518ecb16acb 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=7758934d-ca6b-403e-9e5d-3518ecb16acb 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:21:30.189 11:18:35 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:21:30.189 11:18:35 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:21:30.189 11:18:35 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:21:30.189 11:18:35 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:21:30.190 11:18:35 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:30.190 11:18:35 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:30.190 11:18:35 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:30.190 11:18:35 keyring_linux -- paths/export.sh@5 -- # export PATH 01:21:30.190 11:18:35 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@47 -- # : 0 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:21:30.190 11:18:35 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:21:30.190 11:18:35 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:21:30.190 11:18:35 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 01:21:30.190 11:18:35 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 01:21:30.190 11:18:35 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 01:21:30.190 11:18:35 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@17 -- # name=key0 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@704 -- # digest=0 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@705 -- # python - 01:21:30.190 11:18:35 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 01:21:30.190 /tmp/:spdk-test:key0 01:21:30.190 11:18:35 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@17 -- # name=key1 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@704 -- # digest=0 01:21:30.190 11:18:35 keyring_linux -- nvmf/common.sh@705 -- # python - 01:21:30.190 11:18:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 01:21:30.449 11:18:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 01:21:30.449 /tmp/:spdk-test:key1 01:21:30.449 11:18:35 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:21:30.449 11:18:35 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100265 01:21:30.449 11:18:35 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100265 01:21:30.449 11:18:35 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100265 ']' 01:21:30.449 11:18:35 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:30.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:30.449 11:18:35 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:30.449 11:18:35 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:30.449 11:18:35 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:30.449 11:18:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:21:30.449 [2024-07-22 11:18:35.440220] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
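prep_key above reduces to three traced steps: format_interchange_psk renders the fixed hex key into the NVMeTLSkey-1 interchange form, the result lands in the /tmp/:spdk-test:key* scratch file, and the file is locked down to mode 0600. A condensed sketch, assuming test/keyring/common.sh and test/nvmf/common.sh are already sourced so the helper resolves; the redirection into the key file is not visible in the xtrace and is an inference from the chmod that follows:

    # key material and digest are the values fixed at the top of linux.sh
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0
    echo /tmp/:spdk-test:key0    # prep_key hands the path back to its caller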
01:21:30.449 [2024-07-22 11:18:35.440283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100265 ] 01:21:30.449 [2024-07-22 11:18:35.581513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:30.449 [2024-07-22 11:18:35.634614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:21:30.707 [2024-07-22 11:18:35.687676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:21:31.274 11:18:36 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:31.274 11:18:36 keyring_linux -- common/autotest_common.sh@862 -- # return 0 01:21:31.274 11:18:36 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 01:21:31.274 11:18:36 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:31.274 11:18:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:21:31.274 [2024-07-22 11:18:36.307252] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:21:31.274 null0 01:21:31.274 [2024-07-22 11:18:36.351165] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:21:31.274 [2024-07-22 11:18:36.351580] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:21:31.274 11:18:36 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:31.274 11:18:36 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 01:21:31.274 951626715 01:21:31.274 11:18:36 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 01:21:31.274 726035436 01:21:31.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:21:31.274 11:18:36 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100283 01:21:31.274 11:18:36 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 01:21:31.274 11:18:36 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100283 /var/tmp/bperf.sock 01:21:31.274 11:18:36 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100283 ']' 01:21:31.274 11:18:36 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:21:31.274 11:18:36 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:31.274 11:18:36 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:21:31.274 11:18:36 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:31.274 11:18:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:21:31.274 [2024-07-22 11:18:36.425224] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
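The two serials printed above (951626715 for :spdk-test:key0, 726035436 for :spdk-test:key1) are ordinary session-keyring serials, so the stashed interchange strings can be inspected with the same keyctl verbs the test uses later. A short sketch using this run's names and values; serials change per run, and the trailing decode is only there to show that the base64 payload is the configured hex key followed by a four-byte check value, which is an assumption about the interchange layout rather than something the test asserts:

    keyctl search @s user :spdk-test:key0     # -> 951626715 in this run
    keyctl print 951626715                    # -> NVMeTLSkey-1:00:MDAx...JEiQ:
    psk=$(keyctl print 951626715)
    payload=${psk#NVMeTLSkey-1:00:}; payload=${payload%:}
    base64 -d <<< "$payload" | od -c          # ASCII hex key plus 4 trailing bytes
    keyctl unlink 726035436                   # what cleanup does to drop key1 from @s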
01:21:31.274 [2024-07-22 11:18:36.425561] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100283 ] 01:21:31.532 [2024-07-22 11:18:36.569394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:31.532 [2024-07-22 11:18:36.615227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:21:32.468 11:18:37 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:32.468 11:18:37 keyring_linux -- common/autotest_common.sh@862 -- # return 0 01:21:32.468 11:18:37 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 01:21:32.468 11:18:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 01:21:32.468 11:18:37 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 01:21:32.468 11:18:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:21:32.726 [2024-07-22 11:18:37.750299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 01:21:32.726 11:18:37 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:21:32.726 11:18:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:21:32.984 [2024-07-22 11:18:37.960462] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:21:32.984 nvme0n1 01:21:32.984 11:18:38 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 01:21:32.984 11:18:38 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 01:21:32.984 11:18:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:21:32.984 11:18:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:21:32.984 11:18:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:21:32.984 11:18:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:33.242 11:18:38 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 01:21:33.242 11:18:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:21:33.242 11:18:38 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 01:21:33.242 11:18:38 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 01:21:33.242 11:18:38 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 01:21:33.242 11:18:38 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:21:33.242 11:18:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:33.500 11:18:38 keyring_linux -- keyring/linux.sh@25 -- # sn=951626715 01:21:33.500 11:18:38 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 01:21:33.500 11:18:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
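Strung together, the RPC traffic traced above against the bperf socket is short. A condensed repro of the same sequence, using this run's socket path, NQNs and key name, and ending with the perform_tests call that drives the one-second randread pass reported below:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    $rpc -s /var/tmp/bperf.sock framework_start_init
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
    $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests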
01:21:33.500 11:18:38 keyring_linux -- keyring/linux.sh@26 -- # [[ 951626715 == \9\5\1\6\2\6\7\1\5 ]] 01:21:33.500 11:18:38 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 951626715 01:21:33.500 11:18:38 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 01:21:33.500 11:18:38 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:21:33.759 Running I/O for 1 seconds... 01:21:34.787 01:21:34.787 Latency(us) 01:21:34.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:34.787 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:21:34.787 nvme0n1 : 1.01 17919.54 70.00 0.00 0.00 7114.59 4105.87 10264.67 01:21:34.787 =================================================================================================================== 01:21:34.787 Total : 17919.54 70.00 0.00 0.00 7114.59 4105.87 10264.67 01:21:34.787 0 01:21:34.787 11:18:39 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:21:34.787 11:18:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:21:35.045 11:18:40 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 01:21:35.045 11:18:40 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 01:21:35.045 11:18:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:21:35.045 11:18:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:21:35.045 11:18:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:21:35.045 11:18:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:21:35.045 11:18:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 01:21:35.045 11:18:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:21:35.045 11:18:40 keyring_linux -- keyring/linux.sh@23 -- # return 01:21:35.045 11:18:40 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:21:35.045 11:18:40 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 01:21:35.045 11:18:40 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:21:35.045 11:18:40 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 01:21:35.045 11:18:40 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:35.045 11:18:40 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 01:21:35.045 11:18:40 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:35.045 11:18:40 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:21:35.045 11:18:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:21:35.305 [2024-07-22 11:18:40.406826] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:21:35.305 [2024-07-22 11:18:40.407414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1407d20 (107): Transport endpoint is not connected 01:21:35.305 [2024-07-22 11:18:40.408406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1407d20 (9): Bad file descriptor 01:21:35.305 [2024-07-22 11:18:40.409399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:21:35.305 [2024-07-22 11:18:40.409515] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:21:35.305 [2024-07-22 11:18:40.409630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:21:35.305 request: 01:21:35.305 { 01:21:35.305 "name": "nvme0", 01:21:35.305 "trtype": "tcp", 01:21:35.305 "traddr": "127.0.0.1", 01:21:35.305 "adrfam": "ipv4", 01:21:35.305 "trsvcid": "4420", 01:21:35.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:21:35.305 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:21:35.305 "prchk_reftag": false, 01:21:35.305 "prchk_guard": false, 01:21:35.305 "hdgst": false, 01:21:35.305 "ddgst": false, 01:21:35.305 "psk": ":spdk-test:key1", 01:21:35.305 "method": "bdev_nvme_attach_controller", 01:21:35.305 "req_id": 1 01:21:35.305 } 01:21:35.305 Got JSON-RPC error response 01:21:35.305 response: 01:21:35.305 { 01:21:35.305 "code": -5, 01:21:35.305 "message": "Input/output error" 01:21:35.305 } 01:21:35.305 11:18:40 keyring_linux -- common/autotest_common.sh@651 -- # es=1 01:21:35.305 11:18:40 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:21:35.305 11:18:40 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:21:35.305 11:18:40 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@1 -- # cleanup 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@33 -- # sn=951626715 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 951626715 01:21:35.305 1 links removed 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@33 -- # sn=726035436 01:21:35.305 11:18:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 726035436 01:21:35.305 1 links removed 01:21:35.305 11:18:40 keyring_linux 
-- keyring/linux.sh@41 -- # killprocess 100283 01:21:35.305 11:18:40 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100283 ']' 01:21:35.305 11:18:40 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100283 01:21:35.305 11:18:40 keyring_linux -- common/autotest_common.sh@953 -- # uname 01:21:35.305 11:18:40 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:35.305 11:18:40 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100283 01:21:35.305 killing process with pid 100283 01:21:35.305 Received shutdown signal, test time was about 1.000000 seconds 01:21:35.305 01:21:35.305 Latency(us) 01:21:35.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:35.305 =================================================================================================================== 01:21:35.305 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:21:35.305 11:18:40 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:21:35.305 11:18:40 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:21:35.305 11:18:40 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100283' 01:21:35.305 11:18:40 keyring_linux -- common/autotest_common.sh@967 -- # kill 100283 01:21:35.305 11:18:40 keyring_linux -- common/autotest_common.sh@972 -- # wait 100283 01:21:35.565 11:18:40 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100265 01:21:35.565 11:18:40 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100265 ']' 01:21:35.565 11:18:40 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100265 01:21:35.565 11:18:40 keyring_linux -- common/autotest_common.sh@953 -- # uname 01:21:35.565 11:18:40 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:35.565 11:18:40 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100265 01:21:35.565 killing process with pid 100265 01:21:35.565 11:18:40 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:21:35.565 11:18:40 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:21:35.565 11:18:40 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100265' 01:21:35.565 11:18:40 keyring_linux -- common/autotest_common.sh@967 -- # kill 100265 01:21:35.565 11:18:40 keyring_linux -- common/autotest_common.sh@972 -- # wait 100265 01:21:35.839 01:21:35.839 real 0m5.885s 01:21:35.839 user 0m10.966s 01:21:35.839 sys 0m1.688s 01:21:35.839 11:18:41 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 01:21:35.839 11:18:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:21:35.839 ************************************ 01:21:35.839 END TEST keyring_linux 01:21:35.839 ************************************ 01:21:36.098 11:18:41 -- common/autotest_common.sh@1142 -- # return 0 01:21:36.098 11:18:41 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 01:21:36.098 11:18:41 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 01:21:36.098 11:18:41 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 01:21:36.098 11:18:41 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 01:21:36.098 11:18:41 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 01:21:36.098 11:18:41 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 01:21:36.098 11:18:41 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 01:21:36.098 11:18:41 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 01:21:36.098 11:18:41 -- spdk/autotest.sh@347 -- # '[' 0 
-eq 1 ']' 01:21:36.098 11:18:41 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 01:21:36.098 11:18:41 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 01:21:36.098 11:18:41 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 01:21:36.098 11:18:41 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 01:21:36.098 11:18:41 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 01:21:36.098 11:18:41 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 01:21:36.098 11:18:41 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 01:21:36.098 11:18:41 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 01:21:36.098 11:18:41 -- common/autotest_common.sh@722 -- # xtrace_disable 01:21:36.098 11:18:41 -- common/autotest_common.sh@10 -- # set +x 01:21:36.098 11:18:41 -- spdk/autotest.sh@383 -- # autotest_cleanup 01:21:36.098 11:18:41 -- common/autotest_common.sh@1392 -- # local autotest_es=0 01:21:36.098 11:18:41 -- common/autotest_common.sh@1393 -- # xtrace_disable 01:21:36.098 11:18:41 -- common/autotest_common.sh@10 -- # set +x 01:21:38.637 INFO: APP EXITING 01:21:38.637 INFO: killing all VMs 01:21:38.637 INFO: killing vhost app 01:21:38.637 INFO: EXIT DONE 01:21:39.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:21:39.204 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:21:39.204 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:21:40.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:21:40.137 Cleaning 01:21:40.137 Removing: /var/run/dpdk/spdk0/config 01:21:40.137 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 01:21:40.137 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 01:21:40.137 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 01:21:40.137 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 01:21:40.137 Removing: /var/run/dpdk/spdk0/fbarray_memzone 01:21:40.137 Removing: /var/run/dpdk/spdk0/hugepage_info 01:21:40.137 Removing: /var/run/dpdk/spdk1/config 01:21:40.137 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 01:21:40.137 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 01:21:40.137 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 01:21:40.137 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 01:21:40.137 Removing: /var/run/dpdk/spdk1/fbarray_memzone 01:21:40.137 Removing: /var/run/dpdk/spdk1/hugepage_info 01:21:40.137 Removing: /var/run/dpdk/spdk2/config 01:21:40.137 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 01:21:40.137 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 01:21:40.137 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 01:21:40.137 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 01:21:40.137 Removing: /var/run/dpdk/spdk2/fbarray_memzone 01:21:40.137 Removing: /var/run/dpdk/spdk2/hugepage_info 01:21:40.137 Removing: /var/run/dpdk/spdk3/config 01:21:40.137 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 01:21:40.137 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 01:21:40.137 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 01:21:40.137 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 01:21:40.137 Removing: /var/run/dpdk/spdk3/fbarray_memzone 01:21:40.137 Removing: /var/run/dpdk/spdk3/hugepage_info 01:21:40.137 Removing: /var/run/dpdk/spdk4/config 01:21:40.137 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 01:21:40.137 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 01:21:40.137 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 01:21:40.137 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 01:21:40.137 Removing: /var/run/dpdk/spdk4/fbarray_memzone 01:21:40.137 Removing: /var/run/dpdk/spdk4/hugepage_info 01:21:40.137 Removing: /dev/shm/nvmf_trace.0 01:21:40.137 Removing: /dev/shm/spdk_tgt_trace.pid71274 01:21:40.137 Removing: /var/run/dpdk/spdk0 01:21:40.137 Removing: /var/run/dpdk/spdk1 01:21:40.137 Removing: /var/run/dpdk/spdk2 01:21:40.395 Removing: /var/run/dpdk/spdk3 01:21:40.395 Removing: /var/run/dpdk/spdk4 01:21:40.395 Removing: /var/run/dpdk/spdk_pid100158 01:21:40.395 Removing: /var/run/dpdk/spdk_pid100265 01:21:40.395 Removing: /var/run/dpdk/spdk_pid100283 01:21:40.395 Removing: /var/run/dpdk/spdk_pid71133 01:21:40.395 Removing: /var/run/dpdk/spdk_pid71274 01:21:40.395 Removing: /var/run/dpdk/spdk_pid71466 01:21:40.395 Removing: /var/run/dpdk/spdk_pid71553 01:21:40.395 Removing: /var/run/dpdk/spdk_pid71580 01:21:40.395 Removing: /var/run/dpdk/spdk_pid71684 01:21:40.395 Removing: /var/run/dpdk/spdk_pid71702 01:21:40.395 Removing: /var/run/dpdk/spdk_pid71826 01:21:40.395 Removing: /var/run/dpdk/spdk_pid72010 01:21:40.395 Removing: /var/run/dpdk/spdk_pid72151 01:21:40.395 Removing: /var/run/dpdk/spdk_pid72215 01:21:40.395 Removing: /var/run/dpdk/spdk_pid72286 01:21:40.395 Removing: /var/run/dpdk/spdk_pid72377 01:21:40.395 Removing: /var/run/dpdk/spdk_pid72454 01:21:40.395 Removing: /var/run/dpdk/spdk_pid72487 01:21:40.395 Removing: /var/run/dpdk/spdk_pid72522 01:21:40.395 Removing: /var/run/dpdk/spdk_pid72584 01:21:40.395 Removing: /var/run/dpdk/spdk_pid72689 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73114 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73166 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73211 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73227 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73289 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73305 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73372 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73382 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73428 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73448 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73488 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73506 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73634 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73664 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73733 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73790 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73809 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73873 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73902 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73938 01:21:40.395 Removing: /var/run/dpdk/spdk_pid73971 01:21:40.395 Removing: /var/run/dpdk/spdk_pid74006 01:21:40.395 Removing: /var/run/dpdk/spdk_pid74040 01:21:40.395 Removing: /var/run/dpdk/spdk_pid74069 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74104 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74138 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74173 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74203 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74242 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74271 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74305 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74340 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74369 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74409 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74442 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74484 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74514 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74544 01:21:40.396 Removing: /var/run/dpdk/spdk_pid74614 
01:21:40.396 Removing: /var/run/dpdk/spdk_pid74707 01:21:40.396 Removing: /var/run/dpdk/spdk_pid75004 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75022 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75053 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75066 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75082 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75101 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75114 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75130 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75149 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75162 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75178 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75197 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75210 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75226 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75245 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75258 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75274 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75293 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75306 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75322 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75357 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75366 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75401 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75458 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75488 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75496 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75526 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75530 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75543 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75580 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75599 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75622 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75637 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75641 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75656 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75660 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75674 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75679 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75689 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75717 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75744 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75753 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75782 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75791 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75799 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75839 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75851 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75877 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75885 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75892 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75894 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75906 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75909 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75917 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75924 01:21:40.654 Removing: /var/run/dpdk/spdk_pid75998 01:21:40.654 Removing: /var/run/dpdk/spdk_pid76035 01:21:40.654 Removing: /var/run/dpdk/spdk_pid76134 01:21:40.654 Removing: /var/run/dpdk/spdk_pid76173 01:21:40.654 Removing: /var/run/dpdk/spdk_pid76218 01:21:40.654 Removing: /var/run/dpdk/spdk_pid76227 01:21:40.654 Removing: /var/run/dpdk/spdk_pid76249 01:21:40.654 Removing: /var/run/dpdk/spdk_pid76269 01:21:40.654 Removing: /var/run/dpdk/spdk_pid76295 01:21:40.654 Removing: /var/run/dpdk/spdk_pid76316 01:21:40.654 Removing: /var/run/dpdk/spdk_pid76386 01:21:40.654 Removing: /var/run/dpdk/spdk_pid76402 01:21:40.654 Removing: /var/run/dpdk/spdk_pid76436 01:21:40.654 Removing: 
/var/run/dpdk/spdk_pid76507 01:21:40.654 Removing: /var/run/dpdk/spdk_pid76552 01:21:40.912 Removing: /var/run/dpdk/spdk_pid76576 01:21:40.912 Removing: /var/run/dpdk/spdk_pid76678 01:21:40.912 Removing: /var/run/dpdk/spdk_pid76721 01:21:40.912 Removing: /var/run/dpdk/spdk_pid76753 01:21:40.912 Removing: /var/run/dpdk/spdk_pid76976 01:21:40.912 Removing: /var/run/dpdk/spdk_pid77064 01:21:40.912 Removing: /var/run/dpdk/spdk_pid77098 01:21:40.912 Removing: /var/run/dpdk/spdk_pid77409 01:21:40.912 Removing: /var/run/dpdk/spdk_pid77448 01:21:40.912 Removing: /var/run/dpdk/spdk_pid77729 01:21:40.912 Removing: /var/run/dpdk/spdk_pid78132 01:21:40.912 Removing: /var/run/dpdk/spdk_pid78391 01:21:40.912 Removing: /var/run/dpdk/spdk_pid79166 01:21:40.912 Removing: /var/run/dpdk/spdk_pid79983 01:21:40.912 Removing: /var/run/dpdk/spdk_pid80099 01:21:40.912 Removing: /var/run/dpdk/spdk_pid80162 01:21:40.912 Removing: /var/run/dpdk/spdk_pid81404 01:21:40.912 Removing: /var/run/dpdk/spdk_pid81616 01:21:40.912 Removing: /var/run/dpdk/spdk_pid84699 01:21:40.912 Removing: /var/run/dpdk/spdk_pid84995 01:21:40.912 Removing: /var/run/dpdk/spdk_pid85103 01:21:40.912 Removing: /var/run/dpdk/spdk_pid85236 01:21:40.912 Removing: /var/run/dpdk/spdk_pid85264 01:21:40.912 Removing: /var/run/dpdk/spdk_pid85286 01:21:40.912 Removing: /var/run/dpdk/spdk_pid85309 01:21:40.912 Removing: /var/run/dpdk/spdk_pid85401 01:21:40.912 Removing: /var/run/dpdk/spdk_pid85531 01:21:40.912 Removing: /var/run/dpdk/spdk_pid85677 01:21:40.912 Removing: /var/run/dpdk/spdk_pid85752 01:21:40.912 Removing: /var/run/dpdk/spdk_pid85934 01:21:40.912 Removing: /var/run/dpdk/spdk_pid86017 01:21:40.912 Removing: /var/run/dpdk/spdk_pid86099 01:21:40.912 Removing: /var/run/dpdk/spdk_pid86418 01:21:40.912 Removing: /var/run/dpdk/spdk_pid86773 01:21:40.912 Removing: /var/run/dpdk/spdk_pid86783 01:21:40.912 Removing: /var/run/dpdk/spdk_pid88984 01:21:40.912 Removing: /var/run/dpdk/spdk_pid88992 01:21:40.912 Removing: /var/run/dpdk/spdk_pid89264 01:21:40.912 Removing: /var/run/dpdk/spdk_pid89278 01:21:40.912 Removing: /var/run/dpdk/spdk_pid89292 01:21:40.912 Removing: /var/run/dpdk/spdk_pid89328 01:21:40.912 Removing: /var/run/dpdk/spdk_pid89333 01:21:40.912 Removing: /var/run/dpdk/spdk_pid89411 01:21:40.912 Removing: /var/run/dpdk/spdk_pid89413 01:21:40.912 Removing: /var/run/dpdk/spdk_pid89521 01:21:40.912 Removing: /var/run/dpdk/spdk_pid89527 01:21:40.912 Removing: /var/run/dpdk/spdk_pid89636 01:21:40.912 Removing: /var/run/dpdk/spdk_pid89639 01:21:40.912 Removing: /var/run/dpdk/spdk_pid90031 01:21:40.912 Removing: /var/run/dpdk/spdk_pid90078 01:21:40.912 Removing: /var/run/dpdk/spdk_pid90178 01:21:40.912 Removing: /var/run/dpdk/spdk_pid90256 01:21:40.912 Removing: /var/run/dpdk/spdk_pid90550 01:21:40.912 Removing: /var/run/dpdk/spdk_pid90751 01:21:40.912 Removing: /var/run/dpdk/spdk_pid91125 01:21:40.912 Removing: /var/run/dpdk/spdk_pid91625 01:21:40.912 Removing: /var/run/dpdk/spdk_pid92386 01:21:40.913 Removing: /var/run/dpdk/spdk_pid92972 01:21:40.913 Removing: /var/run/dpdk/spdk_pid92980 01:21:41.170 Removing: /var/run/dpdk/spdk_pid94869 01:21:41.170 Removing: /var/run/dpdk/spdk_pid94926 01:21:41.170 Removing: /var/run/dpdk/spdk_pid94981 01:21:41.170 Removing: /var/run/dpdk/spdk_pid95041 01:21:41.170 Removing: /var/run/dpdk/spdk_pid95155 01:21:41.170 Removing: /var/run/dpdk/spdk_pid95212 01:21:41.170 Removing: /var/run/dpdk/spdk_pid95269 01:21:41.170 Removing: /var/run/dpdk/spdk_pid95327 01:21:41.170 Removing: /var/run/dpdk/spdk_pid95646 
01:21:41.170 Removing: /var/run/dpdk/spdk_pid96799 01:21:41.170 Removing: /var/run/dpdk/spdk_pid96933 01:21:41.170 Removing: /var/run/dpdk/spdk_pid97181 01:21:41.170 Removing: /var/run/dpdk/spdk_pid97729 01:21:41.170 Removing: /var/run/dpdk/spdk_pid97888 01:21:41.170 Removing: /var/run/dpdk/spdk_pid98045 01:21:41.170 Removing: /var/run/dpdk/spdk_pid98142 01:21:41.170 Removing: /var/run/dpdk/spdk_pid98316 01:21:41.170 Removing: /var/run/dpdk/spdk_pid98425 01:21:41.170 Removing: /var/run/dpdk/spdk_pid99078 01:21:41.170 Removing: /var/run/dpdk/spdk_pid99113 01:21:41.170 Removing: /var/run/dpdk/spdk_pid99154 01:21:41.170 Removing: /var/run/dpdk/spdk_pid99408 01:21:41.170 Removing: /var/run/dpdk/spdk_pid99443 01:21:41.170 Removing: /var/run/dpdk/spdk_pid99473 01:21:41.170 Removing: /var/run/dpdk/spdk_pid99908 01:21:41.170 Removing: /var/run/dpdk/spdk_pid99924 01:21:41.170 Clean 01:21:41.170 11:18:46 -- common/autotest_common.sh@1451 -- # return 0 01:21:41.170 11:18:46 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 01:21:41.170 11:18:46 -- common/autotest_common.sh@728 -- # xtrace_disable 01:21:41.170 11:18:46 -- common/autotest_common.sh@10 -- # set +x 01:21:41.170 11:18:46 -- spdk/autotest.sh@386 -- # timing_exit autotest 01:21:41.170 11:18:46 -- common/autotest_common.sh@728 -- # xtrace_disable 01:21:41.170 11:18:46 -- common/autotest_common.sh@10 -- # set +x 01:21:41.449 11:18:46 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 01:21:41.449 11:18:46 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 01:21:41.449 11:18:46 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 01:21:41.449 11:18:46 -- spdk/autotest.sh@391 -- # hash lcov 01:21:41.449 11:18:46 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 01:21:41.449 11:18:46 -- spdk/autotest.sh@393 -- # hostname 01:21:41.449 11:18:46 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 01:21:41.449 geninfo: WARNING: invalid characters removed from testname! 
01:22:07.990 11:19:12 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:22:11.271 11:19:16 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:22:13.803 11:19:18 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:22:15.846 11:19:20 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:22:17.747 11:19:22 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:22:19.647 11:19:24 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:22:22.178 11:19:26 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
01:22:22.178 11:19:27 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
01:22:22.178 11:19:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
01:22:22.178 11:19:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
01:22:22.178 11:19:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
01:22:22.178 11:19:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:22:22.178 11:19:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:22:22.178 11:19:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:22:22.178 11:19:27 -- paths/export.sh@5 -- $ export PATH
01:22:22.178 11:19:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:22:22.178 11:19:27 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
01:22:22.178 11:19:27 -- common/autobuild_common.sh@447 -- $ date +%s
01:22:22.178 11:19:27 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721647167.XXXXXX
01:22:22.178 11:19:27 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721647167.zPWKEG
01:22:22.178 11:19:27 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
01:22:22.178 11:19:27 -- common/autobuild_common.sh@453 -- $ '[' -n v23.11 ']'
01:22:22.178 11:19:27 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
01:22:22.178 11:19:27 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
01:22:22.178 11:19:27 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
01:22:22.178 11:19:27 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
01:22:22.178 11:19:27 -- common/autobuild_common.sh@463 -- $ get_config_params
01:22:22.178 11:19:27 -- common/autotest_common.sh@396 -- $ xtrace_disable
01:22:22.178 11:19:27 -- common/autotest_common.sh@10 -- $ set +x
01:22:22.178 11:19:27 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
01:22:22.178 11:19:27 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
01:22:22.178 11:19:27 -- pm/common@17 -- $ local monitor
01:22:22.178 11:19:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:22:22.178 11:19:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:22:22.178 11:19:27 -- pm/common@25 -- $ sleep 1
01:22:22.178 11:19:27 -- pm/common@21 -- $ date +%s
01:22:22.178 11:19:27 -- pm/common@21 -- $ date +%s
01:22:22.178 11:19:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721647167
01:22:22.179 11:19:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721647167
01:22:22.179 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721647167_collect-vmstat.pm.log
01:22:22.179 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721647167_collect-cpu-load.pm.log
01:22:23.111 11:19:28 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
01:22:23.111 11:19:28 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
01:22:23.111 11:19:28 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
01:22:23.111 11:19:28 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
01:22:23.111 11:19:28 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
01:22:23.111 11:19:28 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
01:22:23.111 11:19:28 -- spdk/autopackage.sh@19 -- $ timing_finish
01:22:23.111 11:19:28 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
01:22:23.111 11:19:28 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
01:22:23.111 11:19:28 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:22:23.111 11:19:28 -- spdk/autopackage.sh@20 -- $ exit 0
01:22:23.111 11:19:28 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
01:22:23.111 11:19:28 -- pm/common@29 -- $ signal_monitor_resources TERM
01:22:23.111 11:19:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM
01:22:23.111 11:19:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:22:23.111 11:19:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
01:22:23.111 11:19:28 -- pm/common@44 -- $ pid=102089
01:22:23.111 11:19:28 -- pm/common@50 -- $ kill -TERM 102089
01:22:23.111 11:19:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:22:23.111 11:19:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
01:22:23.111 11:19:28 -- pm/common@44 -- $ pid=102091
01:22:23.111 11:19:28 -- pm/common@50 -- $ kill -TERM 102091
01:22:23.111 + [[ -n 5850 ]]
01:22:23.111 + sudo kill 5850
01:22:23.119 [Pipeline] }
01:22:23.138 [Pipeline] // timeout
01:22:23.143 [Pipeline] }
01:22:23.162 [Pipeline] // stage
01:22:23.167 [Pipeline] }
01:22:23.185 [Pipeline] // catchError
01:22:23.195 [Pipeline] stage
01:22:23.197 [Pipeline] { (Stop VM)
01:22:23.212 [Pipeline] sh
01:22:23.524 + vagrant halt
01:22:26.822 ==> default: Halting domain...
01:22:33.392 [Pipeline] sh
01:22:33.685 + vagrant destroy -f
01:22:36.966 ==> default: Removing domain...
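Note on the coverage step traced above (spdk/autotest.sh@394-399): it follows the usual lcov post-processing pattern of merging the pre-test baseline and the post-test capture with -a (--add-tracefile), then pruning third-party and helper-app paths with -r (--remove) before any report is generated. The following is a minimal standalone sketch of that flow, not the SPDK script itself; the default output directory and the loop over patterns are illustrative assumptions.

    #!/usr/bin/env bash
    # Sketch of the lcov merge-and-filter step logged above.
    # ASSUMPTION: "out" defaults to ./output here; the job uses /home/vagrant/spdk_repo/spdk/../output.
    out=${out:-./output}
    lcov_opts=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)

    # 1. Merge the baseline capture (taken before the tests) with the post-test capture.
    lcov "${lcov_opts[@]}" \
         -a "$out/cov_base.info" \
         -a "$out/cov_test.info" \
         -o "$out/cov_total.info"

    # 2. Strip coverage for paths that are not SPDK code: DPDK, system headers, helper apps.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov "${lcov_opts[@]}" -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done

Filtering in place (the same cov_total.info as both the -r input and the -o output) matches what the log shows; lcov reads the whole tracefile before writing the reduced one, as the run above demonstrates.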
01:22:36.975 [Pipeline] sh
01:22:37.245 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
01:22:37.254 [Pipeline] }
01:22:37.273 [Pipeline] // stage
01:22:37.279 [Pipeline] }
01:22:37.296 [Pipeline] // dir
01:22:37.301 [Pipeline] }
01:22:37.317 [Pipeline] // wrap
01:22:37.324 [Pipeline] }
01:22:37.338 [Pipeline] // catchError
01:22:37.346 [Pipeline] stage
01:22:37.348 [Pipeline] { (Epilogue)
01:22:37.361 [Pipeline] sh
01:22:37.637 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
01:22:42.915 [Pipeline] catchError
01:22:42.917 [Pipeline] {
01:22:42.931 [Pipeline] sh
01:22:43.212 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
01:22:43.471 Artifacts sizes are good
01:22:43.480 [Pipeline] }
01:22:43.498 [Pipeline] // catchError
01:22:43.511 [Pipeline] archiveArtifacts
01:22:43.519 Archiving artifacts
01:22:43.648 [Pipeline] cleanWs
01:22:43.660 [WS-CLEANUP] Deleting project workspace...
01:22:43.661 [WS-CLEANUP] Deferred wipeout is used...
01:22:43.668 [WS-CLEANUP] done
01:22:43.670 [Pipeline] }
01:22:43.689 [Pipeline] // stage
01:22:43.695 [Pipeline] }
01:22:43.713 [Pipeline] // node
01:22:43.719 [Pipeline] End of Pipeline
01:22:43.759 Finished: SUCCESS
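Note on the resource-monitor teardown traced earlier (pm/common@29-50, before the Stop VM stage): each collector records its PID in a <name>.pid file under the power output directory, and stop_monitor_resources sends SIGTERM to whatever PID is stored there so the collector can flush its .pm.log and exit. A minimal sketch of that pidfile pattern, with the directory and the monitor list as illustrative assumptions rather than the exact pm/common implementation:

    #!/usr/bin/env bash
    # Sketch of the pidfile-based monitor teardown seen in the pm/common trace above.
    # ASSUMPTION: power_dir and the monitor names are placeholders for illustration.
    power_dir=${power_dir:-./output/power}
    monitors=(collect-cpu-load collect-vmstat)

    for monitor in "${monitors[@]}"; do
        pidfile="$power_dir/$monitor.pid"
        [[ -e $pidfile ]] || continue            # monitor was never started; nothing to stop
        pid=$(<"$pidfile")
        kill -TERM "$pid" 2>/dev/null || true    # ask the collector to finish writing its .pm.log and exit
    done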